There is an exercise I always do with my MBA students at Wharton, when teaching the required course on Responsibility in Business. I ask them to define the key word in the title, “responsibility.” It’s a simple enough term. Yet as we talk through the suggestions on the resulting word cloud, conceptual depth and confusion quickly emerge. There is more to responsibility than ethics and following the law. “Responsibility means…well, being responsible!” someone usually blurts out.
The word we always come around to is accountability: the feeling (and the reality) that you have obligations to others, that your fulfillment of those obligations can be evaluated, and that failure to live up to them may have consequences.
There are many manifestations of accountability. It may be the “little voice in your head” telling you what is right; the friend or supervisor who gives you guidance; the rules and practices governing your behavior; or legal and regulatory sanctions. Without it, responsibility collapses to principles, which have no weight behind them. Or it hardens into compliance, which is too rigid for the challenges of reality. Accountability has the further virtue of emphasizing measurement, standards, and evaluation, the core of the adjacent concept of accounting.
Accountability is relevant in many contexts, business or otherwise. I find it especially powerful today in thinking about the many challenges posed by the rise of artificial intelligence. What AI needs is accountability.
Almost as long as there has been public awareness of the power of AI, there has been discussion about its risks and limitations. Even earlier, pioneering computer scientists such as Joseph Weizenbaum in the 1970s and thinkers such as Batya Friedman and Helen Nissenbaum in the 1990s called upon those building algorithmic decision-making systems to consider what could, and would, go wrong. With the growth of AI adoption based on deep learning techniques in the early 2010s; Cambridge Analytica and other scandals involving dominant digital platforms; and distressing examples highlighted by researchers such as Joy Buolamwini and Timnit Gebru, Safiya Noble, Ruha Benjamin, and the data journalists at ProPublica, these questions took on greater salience. More experts began to study the issues, gradually coalescing into a field.
But what to call that field? The two leading academic conferences chose strings of words: Fairness, Accountability, and Transparency (FAccT) and AI, Ethics, and Society (AIES). As practitioners joined the conversation, they split into two tribes: Responsible AI (RAI) or AI governance for those focused on existing problems, and AI Safety for those fearing future risks as AI developed. The European Union, launching its AI regulatory effort in 2018, latched onto Trustworthy AI as a unifying concept. All these phrases, and others such as human-centered AI or risk management, have value. No single term will ever fully capture the range of topics or achieve universal adoption. Still, what should be our lodestar?
I believe it should be accountability.
Accountability is a reflection of AI’s maturation. We are well past the point where AI is primarily of interest to academics, computer scientists, or cutting-edge technology firms. Hundreds of millions of people are already trying out generative AI tools on their own, and billions are subject to AI-based decision-making by both companies and governments. The problems we should be concerned about are now fairly well mapped out. And while huge open research areas remain, both technical and practical tools now exist to address most of them. The most important question for AI developers and deployers is therefore not what we hope to accomplish, but how.
Go back to my initial anecdote about drilling down on the meaning of responsibility. Accountability is the bridge between why and how. It ingests values such as trustworthiness and fairness; takes account of obligations of morality, policy, and law; and pushes us to consider the most effective steps to achieve our goals. That is, I would argue, what AI most needs today. Not the only thing it needs: We must have ethicists thinking deeply about the normative values expressed in our sociotechnical systems; computer scientists pushing the frontiers of debiasing, interpretability, and alignment; and advocates engaging with policy-makers to design legal and regulatory changes. However, for those focused on where the rubber meets the road, the practices and processes that make a difference, accountability should be central.
Accountability leaves room for contestation among goals and values, as well as evolution of the obligations. It’s not a solution in itself. A company can easily describe itself as accountable with no real commitment, just as it can claim to be ethical, responsible, and trustworthy. Many have, whether we’re talking about AI or business behavior more generally. Yet aspiring to accountability puts constraints, tradeoffs, and assessments front and center, in ways that can be translated into actions. What is a corporation, after all, but a structure of accountability?
Accountable AI captures what people really mean when they say AI today is “unregulated” or a “wild west.” There are many laws and regulations applicable to AI, even in countries such as the United States without explicit AI legislation. (Just ask Rite Aid, which the FTC slapped with a five-year ban on facial recognition use and a data destruction requirement for its poor deployment of the technology to catch shoplifters.) And there are many companies adopting meaningful internal processes and contractual arrangements to mitigate AI harms without express legal obligations; not every good example of ethical business behavior arises down the barrel of a regulatory gun. We do need fit-for-purpose AI legislation, but that will be only one piece of the solution. Accountability manifests in many ways, ranging from individual values and communal norms to privately-defined rules and publicly-enforced law.
Words matter. The term artificial intelligence is itself a choice. It took hold at a famous summer workshop at Dartmouth in 1956, but that outcome was not predetermined. Even now, the systems widely described as AI may also be called machine learning, data science, algorithmic systems, models, analytics, or automated decision-making, each with different connotations. The terms we use influence both the problems we identify and the solutions we gravitate toward. A focus on accountable AI will push us in the right directions.