Data Runs Deep Spotlight: Kendra Vant talks to Portable’s Sarah Kaur on data and AI

"Now, almost everyone I meet has an opinion about AI. This makes me optimistic because meaningful oversight and responsible AI development require exactly this kind of broad civic dialogue." - Sarah Kaur

Kendra Vant, one of the leading voices on harnessing AI to solve real business problems, spoke to Portable's own Sarah Kaur about the future of data and AI. Kendra is known for seamlessly threading data and AI into products that both delight customers and open new markets - no small feat in a world where hype can often overshadow reality.

We’re delighted to share this cross-post from her blog, Data Runs Deep. We encourage you to check it out for more insights into the latest and greatest in AI, data strategy and building products that truly make a difference. Enjoy!

Sarah, 2024 was another busy year for data & AI. What’s one development / milestone / news story that really caught your eye?

One story is how the massive energy demands of generative AI are pushing big tech companies to invest in nuclear power. Microsoft plans to revive the Three Mile Island nuclear plant to power its AI operations, Google has contracted with Kairos Power to buy energy from small modular reactors for its data centers, and Amazon is investing in four new nuclear facilities with Energy Northwest. Why nuclear over renewables? For companies needing round-the-clock carbon-free energy by 2030, nuclear offers what solar and wind cannot: stable, continuous supply.

While this nuclear pivot could help launch new power generation technologies globally, we have ongoing concerns about safety, community acceptance, and how to safely store radioactive waste. But it could be transformative! Not just for powering AI development, but also for advancing climate solutions through AI-driven investment in nuclear technology and grid management for a net-zero future.

You’ve been working in and around data & AI for a while now. Many things have changed! But tell us about something that was true when you started out in this space and is still important today.

One constant throughout my journey in AI has been the fundamental importance of human-centered design and ethical considerations in AI development. When I started in this field, I was driven by AI's potential to improve human services - from access to justice to mental health support. Working in that context of how AI serves human needs, I knew that the most critical decisions in AI development weren't just technical ones about model accuracy or data fitting. Nope - AI development involves complex design decisions that go far beyond technical specifications. These include value judgments about which problems to prioritise solving, how to understand and compare the tradeoffs between solutions, and overall, how to ensure AI systems align with our needs and expectations.

Seven years ago, securing funding for ethics reviews and technical accuracy checks in AI systems required advocacy from people who cared enough to convince those holding the purse strings. This is still the case, even though the conversation around responsible AI has become mainstream - perhaps even dominating professional discussions in 2024. It’s progress - but we can’t stop demanding it!

It’s been a heady couple of years with 2024 almost as frothy as 2023. What's one common misconception about AI that you wish would go away?

One long-held belief I want to challenge is the oversimplified notion that "bias in AI is always bad." While it's crucial to acknowledge and understand the biases present in foundation models and LLMs, as well as classical ML, treating bias as universally negative means we may overlook opportunities to work with bias as a feature, not a bug, in pursuit of deliberately positive outcomes.

I believe AI systems' biases can be recognised, managed, and sometimes even leveraged constructively. Through my work with the Diversity and Inclusion in AI team at CSIRO, I've discovered that LLMs, trained on vast amounts of diverse data, can actually help explore perspectives different from our own, potentially broadening our understanding of various human experiences and needs.

So the opportunity space, for me, is working with bias.

To make this more concrete, consider a use case in recruitment. In recruitment for leadership roles, we might find that an AI system reflects historical biases toward male candidates. Instead of just trying to neutralise or “eliminate” this bias, we can deliberately engineer the system to recognise these patterns and implement corrective weighting - essentially using our understanding of the bias to create positive action. Or we might work with an LLM when writing a job ad, using an understanding of gendered language patterns to identify and rewrite job advertisements to be more inclusive. Through prompting, an LLM could recognise traditionally masculine-coded language and suggest more balanced alternatives that appeal to all candidates while maintaining the role's requirements. We could even use the system to review required criteria and highlight where unnecessarily rigid requirements might be deterring diverse candidates, as sketched below.
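As a rough illustration of that last idea, here is a minimal sketch of a job-ad review step: a tiny wordlist flags masculine-coded terms, and an LLM is prompted to suggest a more inclusive rewrite. The wordlist, prompt, and model choice are all assumptions made for illustration - this is not tooling from Portable or CSIRO.

```python
# Minimal sketch: flag masculine-coded wording in a job ad, then ask an
# LLM for a more balanced rewrite. Assumes the openai package is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

# A tiny illustrative sample of traditionally masculine-coded terms;
# a production system would draw on a researched lexicon.
MASCULINE_CODED = {"dominant", "competitive", "rockstar", "ninja",
                   "aggressive", "fearless"}

def flag_coded_terms(ad_text: str) -> list[str]:
    """Return the masculine-coded terms found in the ad text."""
    words = {w.strip(".,;:!?()").lower() for w in ad_text.split()}
    return sorted(words & MASCULINE_CODED)

def suggest_rewrite(ad_text: str, flagged: list[str]) -> str:
    """Prompt an LLM to rewrite flagged language while keeping the
    genuine requirements of the role intact."""
    client = OpenAI()
    prompt = (
        "Rewrite this job ad to replace masculine-coded terms "
        f"({', '.join(flagged)}) with inclusive alternatives, without "
        "changing the genuine requirements of the role:\n\n" + ad_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

ad = "We need an aggressive, competitive rockstar to grow our market."
flagged = flag_coded_terms(ad)
if flagged:
    print("Flagged terms:", flagged)
    print(suggest_rewrite(ad, flagged))
```

The same pattern extends naturally to the criteria review mentioned above: pass the list of required qualifications through a similar prompt and ask the model to highlight requirements that may be unnecessarily rigid.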

Seeing as we’re unlikely to eliminate bias in LLMs, this approach shifts us from blanket condemnation and avoidance to critical engagement with bias - using it as a lens to understand societal patterns and actively work toward more equitable outcomes.

Who do you follow to stay up to date with what’s changing in the world of data & AI?

For “fast updates” about day-to-day developments in the AI ecosystem, I like The AI Daily Brief for big tech news and AI advancements, and I follow Australian thought leaders like consumer rights advocate Kate Bower and Sam Burrett for digestible posts on the flood of reports on AI and productivity.

I also enjoy “slower” reading of books like Co-Intelligence: Living and Working with AI by Ethan Mollick, or The Singularity Is Nearer by Ray Kurzweil for practical and speculative provocations for living better with AI.

And sometimes, I think we need to slow down more and read “older” books like Kate Crawford’s Atlas of AI to understand the costs of production and the concentration of power behind what we experience as “AI”. Another one I love pre-dates our AI fascination - Scott Rosenberg’s Dreaming in Code, which chronicles the problems encountered by software developers on an open-source calendar project. It speaks to patterns we see in AI development today: the tendency to underestimate real-world complexity, the tension between grand visions and practical implementation, and the realisation that development involves not just technical decisions but fundamental questions about how people think and work.

The full conversation is available on Kendra's Substack, Data Runs Deep, and you can find it here.
