Is the US Really Behind on AI Policy? Part II
There is more than one way to regulate emerging tech
In Part I, I made the case that the U.S. is not, as conventional wisdom suggests, far behind Europe and China in the critical task of developing regulatory and policy frameworks for AI. Even if true, though, that doesn’t address a second question: Is the U.S. taking the right approach?
An important point I made in Part I is that one should take care in generalizing about something as multi-faceted as national approaches to AI. Even a single document such as the Biden Administration’s October 2023 Executive Order on AI runs to dozens of pages of mandates on an array of topics. And that’s just one pronouncement. So any assessment of U.S. AI policy needs to start with, “it’s complicated.” That said, some patterns and decisions can be identified.
A big one is whether to view AI regulation as fundamentally a project of its own, or mainly as a matter of applying the existing levers of government to a novel development, updating legal authority only where needed. With Congress deeply polarized and gridlocked, the U.S. does not have the option of leading with a comprehensive new AI legal regime, and it likely wouldn't take that route even if it could. The Biden EO instead issues nearly 100 directives to federal agencies and departments, because AI touches the activities so many of them are already engaged in. Many were already active in AI policy development.
The diffuse approach makes more sense for AI than it did for data protection. Personal data is personal data, and while there are distinctive issues in data collected in sectors like healthcare and banking, relying on narrow sectoral legislation, with the sole backstop of the Federal Trade Commission's authority to police "unfair and deceptive" practices, is a poor fit. It is bound to leave huge gaps, such as the massive data broker industry, and to consign too much activity to the default realm of contractual notice-and-consent processes, where users have little power over, or even awareness of, what's happening.
For AI, by contrast, the questions around the safety of autonomous medical devices are quite different from those for deepfake advertisements or algorithmic mortgage evaluations. Domain expertise and targeted legal authority are important, as long as regulators have the requisite knowledge and resources to take on something as novel as AI. That's why the Biden EO puts so much effort into skilling up agency capabilities. That will be a tall order, and Congress holds the purse strings on the crucially important resources to support these activities. But we've already seen examples, like the Meta housing advertisements and Rite Aid facial recognition cases, where serious enforcement is occurring.
That leads to the next reason to be optimistic about the U.S. approach to AI. One of the biggest challenges of regulating technological systems is the whack-a-mole problem: if a regulator pounds on one kind of activity, companies can often repackage what they do to fall outside the formal scope of the rules.
The issue is not limited to technology or AI. In financial regulation, one of the biggest stories of recent decades has been the growth of shadow banking. Shadow banks are entities that function like banks: they take in customers' funds, provide them with services, and use the money to invest. But they aren't banks, at least under the legal definition of the term. Banks are heavily regulated when they take customer deposits in savings accounts, but money market funds, for example, are regulated differently in the U.S., despite providing comparable services. Hedge funds and similar vehicles are less accessible to retail customers, but they aggregate amounts of capital comparable to those of large traditional banks and deploy them in similar or riskier ways.
The Global Financial Crisis of 2008 showed just how catastrophic the results can be when functionally similar sources of risk are under-regulated because of artificial line-drawing. In whack-a-mole situations, the problem is less the content of regulation than its boundaries.
In the AI context, the analogous problem is defining what counts as "AI" for purposes of regulation. This problem reared its ugly head in Europe when, four years into the development process for the AI Act, and well after a comprehensive legislative draft was issued, OpenAI fired a shot heard around the world with the release of ChatGPT. For all its scope, the original draft of the AI Act did not cover generative AI in any direct way. Drafters had to rush to create new provisions for foundation models, which became a huge source of disagreement in the final stages of negotiations.
The EU subsequently updated the definition of AI in the legislation, in concert with the Organization for Economic Cooperation and Development, to a “future-proof” version that covered generative systems. How confident should one be that AI technology in the coming years will not advance in unexpected ways that push on the limits of that definition?
And then there is the core of the case against the American approach: the reliance on what is derogatorily labeled as self-regulation. How can one defend relying on optional invitations for AI foxes to effectively guard the human henhouse?
"Hard law" matters, but there are also significant benefits to exploring "soft law" methods to achieve policy goals for AI. The central development in administrative law in recent decades is the turn to governance. Scholars recognized that the picture of regulation as solely a matter of what governments do or don't mandate is oversimplified. There is a much broader toolkit to draw on, involving a complex dance of regulators, regulated firms, and stakeholders. The shoves backed by force of law matter, but so do the pushes that aren't…and the nudges, and the convenings, and so forth. And sometimes squeezing blood from a stone isn't the best approach, even when backed by the awesome powers of government. Most European policymakers acknowledge today that despite massive fines, huge implementation investments, and years of effort, they largely failed to change the fundamental business practices of Google and Meta around personal data.
The Biden EO acknowledges and promotes several forms of soft law. One is standards development. The EO relies heavily on the National Institute of Standards and Technology, an arm of the Commerce Department that works to create and promote technology standards. NIST issued a widely praised AI Risk Management Framework in early 2023 that gives organizations step-by-step guidelines and tools to implement comprehensive accountable AI initiatives. Under the EO, NIST will step up efforts to coordinate standards for things such as AI fairness assessments and promote "red teaming" to test AI systems, through a new U.S. AI Safety Institute.
A second is leveraging the pull of procurement. One thing the government retains the capacity to do, even in the U.S., is impose obligations on itself. Federal agencies are, collectively, massive purchasers of everything, including AI. State and local agencies, and often non-profit and educational ones, tend to follow the practices of the federal government. If those buyers impose requirements on AI firms regarding fairness, transparency, testing, and so on, they will pull the rest of the market along with them. Even firms that don't particularly want to take those steps for private customers may do so to avoid creating duplicate processes. Ironically, this is the same dynamic behind the Brussels Effect. Once multinational firms did the work of GDPR compliance for European markets, there was no good reason to create separate systems with lighter protections for users in the U.S. and elsewhere.
These are just a few of the soft law mechanisms the U.S. is drawing on. Cary Coglianese, my colleague at Penn Carey Law School and director of the Penn Program on Regulation, has a nice post in The Regulatory Review describing the EO as a "People-and-Processes" initiative, one that frames AI regulation as a management challenge for both regulators and regulated companies.
With Congress immobilized, the U.S. is also taking full advantage of federalism as a means of regulatory experimentation. States and localities are not waiting for the national government to act. New York State's Policy on Acceptable Use of Artificial Intelligence Technologies, for example, imposed on government agencies and their contractors a spectrum of requirements comparable to what the EU AI Act will mandate more broadly. It did so immediately upon issuance in January 2024, two years before the "first in the world" AI Act becomes effective.
Even if these obligations don’t serve as a template for the private sector, they will provide useful data on effectiveness. Other states and localities will follow their own paths, creating a richer foundation of options whenever Congress is ready to move. And if mighty California chooses to act aggressively, its “Sacramento Effect” may have the same impact on practices across the U.S. as Brussels does globally. We’ve already seen that with data protection and network neutrality rules.
All of this won't work if companies refuse to participate or comply. Or if the soft law proves so soft as to be meaningless. A group of researchers, for example, found that few companies are filing the adverse impact reports on algorithmic hiring mandated under New York City's Local Law 144, because the law leaves them so much discretion in deciding whether their activities are covered. The private sector has many legal and political techniques at its disposal to resist real accountability for AI. However, that's true to a substantial extent with "hard law" approaches as well. And as described in Part I, there are many reasons why AI firms want to take regulation seriously, and are even calling for it themselves. Trustworthiness isn't just something governments care about.
I’ve only discussed some elements of the U.S. and European AI regulatory push in this post, and haven’t talked about China at all. All three jurisdictions will need to learn from one another, from other countries active in AI regulation such as the UK, Singapore, and Japan, and from the real-world performance of AI policies, to succeed.
We’re still early in the AI race, and the AI policy debate. Ultimately every country wants effective regulation and governance for these world-changing technologies, regardless of where the ideas originated. At this point in time, though, many of the best ideas are coming from the supposedly unregulated wasteland of America.