One of the reasons I haven’t updated this Substack recently is that I’ve been busy with a related project. I’m excited to announce that my podcast, The Road to Accountable AI, will be launching Thursday, April 18.
Why a podcast?
When I first started studying legal and ethical questions for AI and algorithmic decision-making, nearly a decade ago, the research and policy community in the area was small. And it was pretty much just a research and policy community. Companies were making missteps in their AI deployments. But few of them, especially outside the big tech platforms, had experts in-house focusing on AI governance and responsibility, and the first prominent algorithmic auditing and AI ethics advisory firms hadn’t yet been founded. It was easy to find examples of AI gone wrong, but challenging to identify examples of AI done right.
Today, thousands of scholars and other researchers from around the world attend major conferences on the technical aspects of AI accountability; dozens of high-profile policy organizations focus on AI regulation and AI safety; a growing number of prominent companies have teams and senior executives dedicated to responsible AI; and many more have launched AI governance initiatives.
Yet here’s the funny thing. When I talk to some of the experts I most respect in this field, they still say to me, “It’s a small community. We tend to talk to the same people.” I don’t feel that way at all; I’m constantly finding brilliant experts deeply engaged on the ground in making AI systems more trustworthy. Perhaps that’s because I’ve always been most comfortable in what academics call the “liminal spaces” between different worlds. Those communities tend not to talk to one another, even when they would benefit from the interaction. And nowadays, the field is about much more than identifying problems and principles; companies are making real investments and developing distinctive solutions for AI accountability.
I’ve always learned best, and fed my academic research, by interacting with smart people and connecting them with one another. In the AI accountability context, there are plenty of glossy consulting firm reports, detailed policy whitepapers, and highbrow statements of principles. But if you want to understand what’s really going on, you need to listen to the people actually building solutions.
So I’m doing that. And sharing the conversations. Each week I speak with an expert in some aspect of accountable AI. Scheduled guests for the first season include influential figures from a mix of backgrounds:
Top AI ethics consultants like Reid Blackman (Virtue), Navrina Singh (Credo AI), and Olivia Gambelin (Ethical Intelligence)
Leading executives such as Paula Goldman (Salesforce), Jean-Enno Charton (Merck Group), and Nuala O’Connor (Walmart)
Visionary technology analyst Azeem Azhar (Exponential View)
Leading AI legal advisor Dominique Shelton Leipzig (Mayer Brown)
Key government officials such as Elham Tabassi (US AI Safety Institute), Beth Noveck (State of New Jersey), and Dragos Tudorache (European Parliament)
I’ve already learned a great deal from recording the initial episodes. If you’re interested in the subject matter of this Substack, you’re going to want to subscribe to weekly episodes of the podcast. (And I know you’re busy, so I keep the conversations to roughly 30 minutes!) Listen on Apple, Spotify, or your favorite player at: https://link.chtbl.com/accountable.
I would be grateful if you could help spread the word and rate the podcast. And please come back and post your questions, responses, and guest suggestions as comments here.