7 Comments

Beautiful article, touching on history, evolution, and the direction AI is taking in our lives. I could picture the accountability you mention as two things: an umbrella to protect us from negative outcomes, and a compass to guide us towards efficient systems.

You can be "responsible" to your own code, to your moral compass, to your duty to yourself. "Accountability" requires others, invoking defined social graphs and social compacts.

Great point, Phil. Personal responsibility is important, but we need networks of communal relationships and feedback to address the issues AI raises.

This makes a lot of sense, and hopefully this train of thought gains traction as AI continues to develop quickly.

This is so important; thank you for writing this newsletter. I will be reading along and weaving your knowledge into my own writing and speaking about intuition, or InnSæi ('the sea within', Icelandic for intuition), and why we need a revolution in how we think and show up: focusing on moral acts of attention and putting more of our spirit into matter, so that our world can become more humane and sustainable.

I disagree with the categorization of AI Safety as being focused only on future harms of AI. It is a fairly well-established subfield of Responsible AI and governance, focused on issues including monitoring and observability, model alignment, red teaming, testing and evaluation, and much more.

The term AI Safety has been co-opted by the existential risk community, which has focused great attention on "frontier models" and their potential harms. You're right that it's not only used that way; the UK's AI Safety Summit, for example, was much broader. The connotation of the word, though, emphasizes certain kinds of dangers. It's a stretch to talk about algorithmic bias or unlicensed data scraping as "safety" issues.
