Hi folks. I know I haven’t posted on this Substack for a while. What can I say, I’ve been busy. Mainly on work related to Accountable AI! My podcast, The Road to Accountable AI just wrapped up Season 1, with sixteen episodes featuring a range of corporate, government, and academic experts on AI governance. And the Wharton online executive education program I direct, Strategies for Accountable AI, will launch its first cohort in October. (For those interested in certificate-based training with some of the world’s top researchers on AI in business, there’s still time to register!)
It feels like AI has just arrived as the technological revolution du jour. And already we’re moving into the second era of AI regulation.
One of the defining features of modern information technology is acceleration. Everything seems to occur faster this time. This trend can be over-generalized, but it captures an important truth. Adoption rates of major technologies are compressing, as yesterday’s innovations become today’s established platforms. Adoption of the telephone required running wires to everyone’s home, but adoption of the internet just required a home device (a modem) at the end of those wires, or later, a consumer device (a mobile phone) you could carry in your pocket. E-commerce had the internet to build on, and social media had the Web 2.0 internet of broadband and mobile. All the way to the current adoption champion, generative AI, with ChatGPT going from zero to 100 million users in two months.
The increasing speed of technological change is frequently offered as an argument against regulation. The claim goes something like this: Tech changes fast, and is getting faster; law changes slowly, and is getting slower as governments become more bureaucratic, polarized, and sclerotic. As a result, old laws force the square pegs of new technologies into round holes, while legal gaps and ambiguities force them into black holes of uncertainty. Both are harmful to innovation.
There are many flaws to this argument, whether expressed casually as an aphorism that law cannot keep up with technology, or dressed up in scholarship as the “pacing problem.” The biggest is that new technologies do not always require new law: old laws can often address new circumstances through interpretation. The current Supreme Court is, to my mind, deeply misguided, but not because it’s impossible to answer how the First Amendment, adopted in the 18th century, can be applied to social media. Sometimes new legislation or regulation is required, but the speed of that change need not match the speed of technological progress.
In the case of AI, there have been serious efforts to map the need for new laws for at least six years, since the European Union convened its High-Level Expert Group on Artificial Intelligence. Major initiatives saw the light of day in 2022 and 2023, with the passage of AI-specific legislation in China, non-regulatory action in the U.S. leading to the October 2023 issuance of Executive Order 14110, the UK’s “Pro-Innovation” vision, and the passage of the AI Act in Europe. Many other nations have issued guidelines and statements of principles, some of them, such as Singapore’s, quite sophisticated. However, those four jurisdictions are the most influential due to a combination of economic power, local markets, and importance in AI development.
At the start of 2024, the standard story was that the EU was leading the regulatory race through adoption of comprehensive rules; China was focused on promoting political control through, and of, AI systems; and the US and UK were some combination of policy laggards and laissez-faire realms with few protections.
This was always a flawed narrative. What we’re seeing in mid-2024, though, are shifts in all four jurisdictions. The AI regulatory landscape a year from now could look quite different.
European Union
In Europe, policy-makers took a victory lap with the final adoption of the AI Act. But that was just the end of the beginning. Now come the essential tasks of staffing up the EU AI Office, developing standards and codes of practice, and determining what effective compliance looks like. On all fronts, there is great uncertainty, especially because, despite the AI Act’s structure as a uniform regulation, certain key decisions remain delegated to national regulators, who may choose different approaches.
Perhaps even more significant, companies have begun to vote with their feet. Both Apple and Meta have announced they won’t implement some of their most advanced AI features in Europe, at least until there is more clarity about compliance requirements. This predictably led European policy-makers to fume that the U.S. Big Tech platforms were continuing their history of bad behavior.
What’s happening is that companies are calling Europe’s bluff. The bet was that no one could afford to ignore Europe’s massive market, so companies would have to go along with the EU’s rules. Apple and Meta aren’t playing along. While they can afford to do so because they have so few major competitors, none of them based in Europe, avoiding a market (or limiting your offerings) because of what you consider unacceptable regulation is a legitimate choice. It’s what all the major U.S.-based platforms have done in China.
The other development in Europe is that, unsurprisingly, the rushed effort to bolt on rules for generative AI foundation models late in the development of the AI Act failed to address important hard questions. Given Europe’s pro-regulatory bent, this situation seems unlikely to stand. There will have to be an “AI Act 2.0,” and it will take place in a different political environment, with the European Parliament shifted decidedly to the right.
United States
There are two important developments, or possible developments, in the U.S. on AI policy: activity in the states, and the 2024 election.
Hundreds of AI laws have been proposed in state legislatures, and several have already been adopted. Most have narrow requirements, such as disclosure of deepfakes, but Colorado’s new law imposes a broad set of obligations to protect against algorithmic discrimination. California’s SB 1047, which would impose significant new obligations on frontier model developers, has a good chance of adoption, despite furious opposition from most of the tech and AI companies. The more time passes without significant federal AI legislation, the more the U.S. environment for AI regulation will be defined by state regulation, particularly that of the largest states.
The upcoming Presidential election could herald a dramatic shift on AI policy if, as the odds currently suggest, Donald Trump returns to the White House. The Republican platform calls for repeal of the Biden Administration’s EO 14110, an Executive Order providing detailed guidance to federal agencies for establishing AI governance regimes. Perhaps the repeal will be limited to the Order’s one set of direct obligations: the requirement that developers of high-powered foundation models provide disclosures to the government. What, if anything, replaces it will be a big question.
The tech CEOs and venture capitalists such as Elon Musk and Marc Andreessen who recently pivoted to supporting Trump complain of excessive technology regulation under the Biden Administration. But other than launching investigations of OpenAI and seeking comment on topics such as open-source foundation models and copyright law for generative AI, it’s hard to identify where the U.S. federal government has acted to slow down its AI industry. On the flip side, more explicit industrial policy of subsidizing domestic AI developers, especially those with a national security bent, could be on the table, especially given long-standing Democratic support for increased tech R&D funding.
We could see new directions from a Republican administration and Congress when it comes to AI policy. Preempting state laws would become a major legislative debate in Congress, especially if the California legislation passes. At the same time, the “new right” wing of the Republican party that Vice Presidential nominee J.D. Vance hails from wants to crack down more aggressively on the concentrated power of big tech companies. Senator Josh Hawley has proposed an AI framework based on licensing of powerful foundation models and private rights of action against AI developers.
There will be new developments if the Democrats win as well, especially if President Biden does not head the ticket. AI and privacy legislation in some form could well come out of Congress in 2025 or 2026. While an EU-style comprehensive regime is not on the table, a U.S. approach to AI policy based at least in part on new legislation would significantly change the landscape from recent years.
China
China passed a law on deepfakes in 2022, and one on generative AI in 2023. Unsurprisingly, the Chinese government is acting aggressively to control content created with these systems. It already requires testing of consumer-facing LLMs to ensure they uphold “core socialist values” as a condition of staying on the market. There has even been at least one arrest of a Chinese citizen for using ChatGPT to generate fake news about a train crash.
On the other hand, the Chinese leadership recognizes that AI is a strategic technology. China doesn’t want to fall behind the U.S. and other Western countries because its regulatory approach to generative AI is too strict. The generative AI law was already revised from its initial proposal to scale back certain onerous requirements. And China is moving relatively slowly with development of a comprehensive AI law, starting with a scholar-drafted discussion draft released in May.
How much U.S. export restrictions on advanced chips and other technologies will retard Chinese AI development is a key question. As is the extent to which the U.S. imposes further restrictions, such as limiting distribution of open weights for powerful models. A Republican victory in November’s Presidential election would likely mean a more confrontational approach to China. All of that will impact development of the Chinese domestic AI industry, and the development of Chinese domestic AI legislation.
United Kingdom
The UK’s approach to AI regulation in recent years has been a significant point of post-Brexit divergence. The Conservative government consciously rejected the EU’s path to comprehensive regulation, in contrast to data protection, where Britain retains the major elements of Europe’s General Data Protection Regulation. The UK has a home-grown AI powerhouse, DeepMind, even though it was long ago acquired by Google. With its startup ecosystem and research universities, Britain is positioned to be a player in AI development, not just AI regulation. The Tories explicitly took a line of contrasting their “pro-innovation” approach with the pro-regulatory regime across the Channel, hoping to attract talent and capital.
The jury is still out. But so is the Conservative government of Rishi Sunak. The new Labour government can be expected to be more favorable to regulation. It has stated that it plans “binding rules” on major foundation model developers, but little beyond that. While the government proposed a slew of bills in the King’s Speech announcement of its program, its AI agenda remained vague. It seems likely that the UK will adopt significant AI legislation before the U.S., but not on the same trajectory as the EU. The British administrative agency with the most expertise on AI policy is the Competition and Markets Authority, the competition regulator. That might put a different spin on the regulatory activity that eventually emerges compared to other countries, where the AI debate has been shaped either by AI safety or human rights concerns.
Where We Go From Here
There remain significant uncertainties in all the listed jurisdictions. It is clear that the AI regulatory landscape a year from now will look somewhat different, but not obvious which direction it will go.
The challenge for companies is that there are already laws and rules on the books requiring compliance activity. They need to follow AI Policy 1.0, even as AI Policy 2.0 emerges. And while all this change is happening in the regulatory world, things are developing in the world of AI development and deployment as well. If foundation model performance continues to improve substantially, especially if the biggest models continue to be the best ones for most tasks, it will push toward more regulation around AI safety, intellectual property, and market competition. On the other hand, if companies consistently find that, for all the excitement, they still struggle to find real business use cases, it might reduce some of the pressure to get regulatory guardrails in place quickly.
The project of building global AI regulation remains at a relatively early stage. And major countries are still looking to carve out their own unique path. That makes for a more complicated environment, but it also means we will be able to observe natural experiments that reveal which legal approaches turn out to be more effective.