In a previous post, I defined the concept of “Accountable AI,” and explained why it’s the right way to frame efforts to address ethical, legal, and societal questions around AI implementation. I made the case broadly for the value of such initiatives, but that leaves open an important question. If you’re an organization using AI, or considering AI-based solutions, why should you make the effort to engage in Accountable AI practices?
Chances are, if you’re reading this, you already believe AI is a big deal, and that it should be implemented ethically and responsibly. Perhaps you need to convince a manager to invest the resources, and to accept possible delays and restrictions. Perhaps you want to convince policymakers to act (or you’re a policymaker yourself). Perhaps you’re convinced the big AI providers or regulators are already solving the problem for you. Even if none of these apply, it’s healthy to question the reasons for going down any path.
Fortunately, there are many good answers to the “why” question. Here are five of them. The last four are supported with data; the first is a reason why you shouldn’t wait for data to convince you.
1. It’s the Right Thing to Do
The case for Accountable AI is like the case for any responsible business practices. At the core, organizations and their leaders should want to do the right thing. Do you really want to implement systems that abuse individuals’ personal information? That manipulate them in harmful ways? That reinforce unfair biases against underprivileged groups? Or that may fail to meet your own objectives, for reasons you don’t understand? Sure, there can be disagreement on where to draw lines between profit and purpose, but no one goes to work trying to do evil. We’re far past the point where anyone involved with AI can say credibly, “It’s just math. The consequences don’t concern me.”
2. Risk
There have already been many serious controversies over AI implementations, ranging from racially biased facial recognition systems to lawsuits over potential copyright violations. The law firm Weil Gotshal found that 18% of S&P 500 companies, and 12% of the broader Russell 3000, discussed AI as a risk factor in their 2023 annual filings with the Securities and Exchange Commission. And it’s not just a future concern. More than a third of respondents to DataRobot’s 2022 State of AI Bias survey said their firm had already had incidents involving AI bias. Of those, 62% lost revenues, 61% lost customers, and 43% lost employees.
3. Trust
Everyone talks about the crisis of trust in today’s world. (Heck, I wrote a book about trust six years ago, in the context of blockchain technology.) AI is a potential accelerant of those trends, thanks to developments like misinformation and deepfakes. But it also faces roadblocks when people are unwilling to trust AI systems. The Pew Research Center found in 2023 that only 10% of U.S. adults reported being more excited than concerned about AI, compared to 52% who were more concerned than excited. The trust gap for AI is corrosive in itself and has serious downstream impacts, which the next two answers reflect.
4. Business Benefits
If you won’t invest in Accountable AI because of downside risks, perhaps the upside returns will convince you. In the current environment of fractured trust, and particularly the global backlash against technology platforms, fears of AI harms cannot be ignored. Users, regulators, partners, and other stakeholders expect firms to take action, which can put the brakes on deployments. According to a 2023 Algomarketing survey of global marketing leaders, governance and ethical considerations were among the biggest concerns about AI adoption for 44% of respondents. Three quarters said their firms had already delayed projects due to ethical concerns. If you want to realize the full benefits of AI, you have to incorporate accountability mechanisms.
5. Regulation
And then there’s regulation. Far from being the legal “wild west” that’s often described, AI is one of the most active areas of regulatory development and investigation around the world. The European Union, China, and several U.S. states have already passed AI-specific laws, and a host of regulators are engaging in enforcement actions on algorithmic bias and privacy violations under existing legal authority. Regulation puts compliance obligations on all firms, which most are ill-prepared to meet. For example, in the 2023 KPMG US AI Risk Survey Report, 84% of respondents said they believe independent audits of AI models will be required within four years. Yet only 18% believe they have the internal expertise to audit their AI systems.
From Why to How
These answers aren’t mutually exclusive. Frankly, everyone should find all of them convincing reasons to ensure their AI systems are as ethical, safe, and legally compliant as possible. Different organizations, however, will weigh the concerns differently. Established firms in heavily regulated industries may have a different risk calculus than government agencies, mission-driven nonprofits, or social media startups. Their capacity to put people and money into Accountable AI programs will similarly vary. Firms developing AI solutions will also think differently than those implementing technology developed elsewhere. And the same organization may have different feelings depending on which AI system we’re talking about.
Still, thinking through what most motivates you is a good way to build a template for adopting and reviewing Accountable AI activities. If regulation is a big motivator, for example, you’ll want to be sure you can meet regulatory requirements, both current and upcoming. It will help you evaluate outside vendors. And it will help you communicate inside and outside the organization about your efforts.
Your Turn
What do you find to be the most compelling “why” for Accountable AI?