Conventional wisdom holds that there are three ways for governments to address powerful emerging technologies—defer, regulate, or control. Each approach supposedly has a champion. The U.S. favors deferring to market forces and mechanisms, which promotes rapid innovation but invites abuses by increasingly powerful platforms. The European Union acts aggressively to protect fundamental rights through comprehensive top-down regulatory codes. And China’s leadership sees the technology sector as but another tool in its efforts to control its own people and expand its global power.
While this framework helps to explain developments over the past fifteen years, it is increasingly misleading today. Europe, typically cast as the hero of the narrative, has much to be proud of, but its supporters should refrain from taking victory laps. This is particularly true for regulation of AI.
Columbia Law professor Anu Bradford, author of the book Digital Empires, is among the most compelling advocates for the so-called Brussels Effect: the view that Europe’s rights-based regulatory crusade is justifiably setting global standards. She sees the AI debate mirroring earlier controversies over data protection and digital platform power. Specifically, she argues in Foreign Affairs that the U.S. approach to AI regulation involves nothing but optional self-regulation, credulously invoking an “uncompromising faith in markets” while “reserv[ing] a limited role for the government.” As a result, the U.S. finds itself lagging behind as Europe and China fight a global battle over the future of AI.
This may have been an accurate description of the U.S. climate on data protection early in the 21st century, although even then the Obama Administration issued a Privacy Bill of Rights in 2012 and pushed, ultimately unsuccessfully, for federal legislation. What has happened since is instructive. As AI arrived as the next all-encompassing challenge for technology policy, American policymakers learned from the digital platform wars and the relentless drumbeat of privacy and content moderation scandals. So did their counterparts in the E.U. and China. But they learned different lessons.
Europe’s approach was to repeat for AI the strategy that had already succeeded, epitomized in the General Data Protection Regulation (GDPR) that took effect in 2018. The U.S. and China changed tack. The China story deserves its own discussion, given the many complexities of the People’s Republic. (A short version is that China, while doing much that is deeply problematic, is not the cartoon villain described in the West.) When it comes to the U.S., the idea that the Biden Administration’s AI policy simply “runs it back” to prior Administrations’ privacy approaches is a grave misunderstanding. What we are seeing today in the U.S. is fundamentally new, thanks to three big shifts.
First, the market context has changed. Where Google, Facebook, and other major digital platforms were once heralded as masters of the universe and geniuses of innovation, they are now increasingly viewed as sclerotic and nefarious. And more broadly, the religion of shareholder value, which preached deference to market forces, has given way to an era in which everyone wants to believe they are doing good for the world. Those in Silicon Valley who still pay homage to the ethos of “move fast and break things,” such as Elon Musk, endure harsh criticism.
The “uncompromising faith in markets,” and innovation as the paramount goal, are no longer so prominent in American boardrooms. In Washington, DC and elsewhere, powerful voices on both the left and the right now argue forcefully for aggressive regulation of Big Tech. Those who insist companies must be free to do as they please if they are to deliver the benefits of innovation are now the ones struggling to overcome opposition.
In response, industry is now more aggressively working to develop regimes of ethical and legal responsibility. Most companies with a major AI presence have AI governance processes, with both technical and operational mechanisms systematically implemented. These regimes are imperfect, and when push comes to shove (as when Google fired ethical AI researchers Timnit Gebru and Margaret Mitchell over a paper criticizing large language models), profit still usually wins out over ideals. However, dismissing responsible AI programs as nothing but window dressing or cynical attempts to head off regulation is a mistake.
The second major change was in the context for government action in the U.S. In 2012, the Obama Administration still believed it might catalyze comprehensive privacy legislation. A decade later, with increasing polarization in Congress and traditional norms breaking down in American politics, the Biden Administration had no such illusions. The realities of the political moment forced those developing American AI policy to look to methods other than legislation.
And finally, the substantive context changed. Not all technologies are alike when it comes to regulation. What worked (arguably) for data protection will not necessarily work for AI.
The E.U. itself recognizes this. The AI Act does not simply transpose the fundamental rights protections of the GDPR to a new context. It takes an altogether different tack, classifying AI systems into a risk hierarchy, with the most significant regulatory obligations applying only to those deemed “high risk.” There are two reasons for the shift. One is that the GDPR is fundamentally about personal data, while the AIA is about machine learning technologies whose distinctive nexus to human rights lies in their outputs. The second is a bit of sausage-making. The AIA began as a product safety law, concentrating on the risks of AI systems, and only later morphed into an attempt at GDPR 2.0. Regardless of the reasons, AI regulation is not a continuation of data protection regulation by other means.
As a result of all this, the U.S. at the advent of the Biden Administration was positioned and motivated to take serious action on AI. And it was not starting from scratch. AI policy is one of the few areas where the Biden Administration’s efforts in many ways built on its predecessor’s actions. The Trump Administration issued an Executive Order on AI in 2019 that led to a National AI Initiative, as well as a 2020 Executive Order on AI in government. The Biden Administration chose not to revoke these Trump Executive Orders when issuing its own AI policies.
In contrast to Europe, there is no U.S. AI Act in sight at the start of 2024. Nor is there a Generative AI or Deepfake law in the U.S. comparable to recent Chinese legislation. Yet this doesn’t mean the U.S. is a legal wasteland when it comes to AI. Meta’s settlements with the Departments of Justice and Housing and Urban Development over discriminatory algorithmic housing advertisements not only established ground-breaking precedent in a complex setting; they also forced the company to change its practices and to invest in developing a novel system for algorithmic debiasing. The Federal Trade Commission’s enforcement actions against Weight Watchers and Rite Aid pioneered the remedy of algorithmic disgorgement, requiring companies to destroy data and algorithms that violated rights. And few companies anywhere outside the U.S. file public adverse impact reports for algorithmic bias in hiring of the kind already required under New York City’s Local Law 144.
The reality is that, while AI-specific laws are certainly needed, many existing ones cover AI-based activity just fine. When it comes to the hard work of turning principles and formal rules into actions that successfully influence the way companies act in developing and implementing AI-based systems, the U.S. has nothing to be ashamed of. In some ways, it is ahead of the game.
In Part II, I will address the substance of U.S. efforts toward accountable AI. Spoiler: In important ways, the U.S. approach may be superior to the current direction in Europe and China.