The must-listen perspectives on data and AI

More insights from the greatest minds in data science, now on High Signal

Duncan Gilchrist
Jeremy Hermann
June 20, 2025

Summer reading lists are great (unless they’re full of AI-generated recommendations that don’t actually exist)... but have you considered a summer listening list?

We've got something better than the latest business book: conversations with the people shaping the future of data and AI. Since our last podcast highlight reel in December, High Signal has had the privilege of hosting some of the most influential voices in our field:

  • Tim O'Reilly, who has been tracking tech’s next big thing for the last four decades, explained why AI isn't ending programming but democratizing it. 
  • Fei-Fei Li — often called the "Godmother of AI" — shared her vision of AI as a civilizational technology that could reshape society as profoundly as fire or writing. 
  • Peter Wang, the architect behind much of the open source infrastructure that powers modern data science, warned us about a data quality crisis hiding behind all the AI hype.

What makes these conversations special isn't just the caliber of guests (though that doesn't hurt), or our talented host Hugo Bowne-Anderson. It's that they're willing to go beyond the surface-level takes and share the nuanced, sometimes contrarian perspectives that come from years of actually building these systems, making these mistakes, and learning these lessons the hard way. 

And nearly every guest, in their own way, comes back to two fundamental truths. First, data quality — not model sophistication — remains the biggest challenge facing today’s leaders. Second, human judgment and expertise are becoming more essential than ever. 

So skip the beach read. Here are nine conversations that will change how you think about data, AI, and what comes next.

8/ Elena Grewal on making data-driven decisions without perfect experiments 

Dr. Elena Grewal led Airbnb's data science team from the ground up over seven years, building it into a 200-person organization. Now teaching at Yale and running an ice cream shop in New Haven, she shared how waiting for perfect experimental conditions can prevent organizations from getting the insights they need to make better decisions.

Elena argues that experimentation exists on a spectrum, and companies often get paralyzed waiting for statistically significant A/B tests with perfect control groups. In reality, she says, you can gain valuable insights from much simpler approaches. At Airbnb, Elena's team started fraud prevention with what she calls "a very simple heuristic" — basic rules to flag potentially risky users — that eventually evolved into more sophisticated models overseen by a 30-person team. The key insight: they didn't wait for the perfect fraud detection system before taking action.

"The reality is, it’s really hard to achieve an experiment that's well-powered, with great statistical significance, and a perfect control and treatment group. Especially early on, if you don't have enough people using your product. But it’s still helpful to run some sort of test and see, even if you don't have the most power, is there anything that you can glean?"

Hear more in Elena's full episode — including how she runs imperfect experiments at her ice cream shop — on Apple Podcasts, Spotify, YouTube, or check out the show notes.

9/ Eric Colson on why data scientists should drive ideas, not just respond to requests

Eric Colson, data science advisor and former leader at Netflix and Stitch Fix, shared his concern that most companies severely limit their data teams' potential by treating them as support functions — essentially turning skilled data scientists into order-takers for business requests. 

At Stitch Fix, for example, a data scientist exploring customer-selected style profiles like "edgy" and "preppy" discovered something surprising: all groups were buying similar items. The self-reported preferences were meaningless for predicting behavior. So she created customer segments based on actual behavior rather than stated preferences, which transformed their recommendation engines, inventory management, and marketing messaging.

The key insight? This breakthrough came from curiosity, not from a business request.

"The main challenge is a lot of companies treat their data scientists as a support function... The ideas are coming from the business teams to the data scientist, which could have some value, but it leaves a lot on the table when we don’t get ideas from the other direction — from data scientists."

Hear more in Eric's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

10/ Ari Kaplan on why data intelligence matters more than artificial intelligence

Ari Kaplan is Databricks' Global Head of Evangelism and a pioneer in sports analytics (known as ‘the real Moneyball guy’). After years building and advising data teams, Ari has pinpointed a crucial distinction between artificial intelligence and "data intelligence": the ability to make sense of your data before you can effectively implement AI.

Ari describes a challenge that will be familiar to anyone working at mid-sized companies: data sprawl that makes it nearly impossible to find what you need, and no clear way to understand what data actually exists or means. This data chaos, Ari argues, is where modern data intelligence platforms can provide real value — not by building fancier models, but by helping organizations understand what they already have before they try to do anything sophisticated with it.

"In a large company, you may have 10,000 tables or more with hundreds of thousands of columns. Before long, you have 20 tables that have the word sales in the title: sales_21, sales_NE. Is that Northeast? Is that New England? Is that Nebraska? And you need humans to dig through the data. But with data intelligence, you can use GenAI to understand what your data is saying…the real challenge for enterprises today isn't AI, it's making sense of your data in the first place."

Hear more in Ari's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

11/ Peter Wang on the data crisis hiding behind AI hype

Peter Wang, Chief AI Officer at Anaconda and a key figure behind PyData, helped shape the open source ecosystem that makes modern data science and AI possible. But he isn’t impressed with what he sees happening to the data foundations that power these systems.

While everyone's obsessing over the latest models, Peter says we're neglecting the underlying data infrastructure. Most concerning is the state of training data for LLMs, which he describes as being in a "Chernobyl graphite fire" state — a complete disaster that few want to address openly. Companies know they’re training on problematic data, but nobody wants to talk about their sources. According to Peter, this has created a dangerous blind spot where the most critical foundation work is being treated as an afterthought.

"Five or ten years ago, you'd be at a conference, and everyone's shouting, 'We need data!' and you have to whisper 'You should really look at AI.' But now it's the opposite. Everyone's shouting 'Let's do something with Gen AI!' and you have to whisper, 'You still need good data, solid data as the foundation for it.'"

Hear more in Peter's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

12/ Stefan Wager on why bad data-driven decisions are worse than no data

Stanford professor Stefan Wager is an expert in causal machine learning, and has advised companies like Uber, Google, and Facebook on moving beyond simple prediction to actually understanding causation and making better decisions. But Stefan warns that some companies make a critical error when adopting data science: they treat it as a replacement for human judgment rather than an enhancement. 

Organizations often have leaders with years of hard-won experience who understand their markets, customers, and operations at a deep level. But in the rush to become "data-driven," companies frequently throw away this institutional knowledge in favor of running predictive algorithms without deeper reasoning. This approach, Stefan says, can lead to decisions that are more harmful than helpful. 

"Business leaders often have strong instincts for cause and consequence, counterfactuals, and business dynamics. When you get into data-driven methods, everything is harder because you're trying to do things quantitatively, precisely, more abstractly. But the biggest mistake you can make is: if you want to use data, and all you know how to do is run predictive algorithms, then you forget about everything that's important. That's the worst thing you can do. I'd rather you not use data than use data the wrong way."

Hear more in Stefan's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

13/ Tim O'Reilly on why AI isn't ending programming — it's democratizing access to it

Tim O'Reilly is the founder of O'Reilly Media and one of the most influential voices in the history of technology. He popularized the term "Web 2.0" and has spent decades identifying and explaining major technological shifts. Now, as we face another inflection point with AI, Tim argues that we're witnessing something far more familiar than revolutionary.

Tim reframes the current AI panic (the end of programming as we know it!) by placing it in historical context. AI is simply the next layer in the decades-long evolution of computing becoming more accessible to humans. He draws parallels to previous transformations that seemed disruptive at the time: from assembler to high-level languages, from batch processing to interactive computing, from command lines to graphical interfaces to the web.

Each of these shifts expanded who could use computing technology rather than eliminating the need for technical skills. Instead of viewing AI as a sudden revolution that will eliminate programming jobs, Tim says we should see it as extending a trend that has consistently created more opportunities than it destroyed.

"I don't think it's making programming go away. It's making programming much easier so more people can do it, just like a compiler or an interpreter made programming a lot easier... We've just added another layer to the stack. Each time that has happened, more people can access the technology, more people can do cool things."

Hear more in Tim's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

14/ Barr Moses on the data quality crisis companies aren't talking about

Barr Moses is the CEO and co-founder of Monte Carlo, where she coined the term "data downtime" and has become a leading voice on data reliability. Through her work with hundreds of companies, she's identified an emerging challenge as everyone races to adopt AI: many are focused on the wrong differentiator. 

Leveraging the best foundation models isn't the moat — they're increasingly commoditized and accessible to everyone. Instead, the real competitive advantage lies in the quality and uniqueness of your data. But Monte Carlo’s recent survey reveals that while 100% of data leaders feel pressure to build with AI, only about a third believe their data is actually ready for it.

"I think the reality is that today anyone has access to the latest and greatest model. Within a couple of minutes, we can all get an API key and off we go. In that world, what is our moat? As organizations, what is our competitive advantage? What we are seeing and hearing, time and again, from enterprise is that your moat is your data."

Hear more in Barr's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

15/ Eoin O'Mahony on why good metrics don't guarantee good decisions

Now a partner at Lightspeed, Eoin O'Mahony was once my (Duncan’s) close collaborator at Uber, where he led teams across marketplace dynamics, pricing, and experimentation. His experience building data systems at massive scale taught him a counterintuitive lesson: positive metrics can be meaningless — and dangerous — if you don't understand the mechanism behind them.

Eoin explains that it’s possible for metrics to suggest you're improving things when you're actually harming them. This can happen due to network effects, seasonality, or confounding variables that make causation difficult to determine. At Uber, I watched him block product launches — even when metrics looked positive — if his team couldn't explain mechanically why the change worked. 

"One of the things that I really learned to appreciate in my time at Uber is the difficulty and nuance in measurement. Not so much in getting your measurement wrong, but that if you're not careful about how you set things up, you can get the sign of your measurement wrong. And you can end up doing the opposite of what you should be doing.”

Hear more in Eoin's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

16/ Fei-Fei Li on AI as a civilizational technology

Talk about a dream guest. Fei-Fei Li — co-director of the Stanford Human-Centered AI Institute and often called the "Godmother of AI" — shared her unique perspective on both AI's technical capabilities and its broader implications for society.

Fei-Fei argues that AI represents a societal shift comparable to fire, writing, or electricity. This scale of impact requires us to put human values at the center of AI development from the beginning, not as an afterthought. She outlines human-centered AI as consisting of three concentric circles: individual dignity and agency, community empowerment (like ensuring AI augments rather than replaces human creativity), and societal prosperity. 

This framework helps organizations think beyond technical capabilities to ensure AI systems drive shared benefits rather than concentrated power.

"AI is a civilizational technology. We now know there's very little doubt that AI's impact on our society is transformational. This has to do with jobs, with the way governments are impacted, it touches on geopolitics... How do we make sure this technology doesn't tear our society apart? How do we ensure shared prosperity? These are bigger societal problems that have to do with human-centered AI."

Hear more in Fei-Fei's full episode on Apple Podcasts, Spotify, YouTube, or check out the show notes.

Stay tuned for even more High Signal

Have a data leader you’d love to hear on High Signal? Let us know on LinkedIn. And be the first to hear from our upcoming guests by subscribing on Apple Podcasts, Spotify, and YouTube.
