The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?

It’s 2035, and artificial intelligence is everywhere. AI systems run hospitals, operate airlines, and battle each other in the courtroom. Productivity has spiked to unprecedented levels, and countless previously unimaginable businesses have scaled at blistering speed, generating immense advances in well-being. New products, cures, and innovations hit the market daily, as science and technology kick into overdrive. And yet the world is growing both more unpredictable and more fragile, as terrorists find new ways to menace societies with intelligent, evolving cyberweapons and white-collar workers lose their jobs en masse.

Just a year ago, that scenario would have seemed purely fictional; today, it seems nearly inevitable. Generative AI systems can already write more clearly and persuasively than most humans and can produce original images, art, and even computer code based on simple language prompts. And generative AI is only the tip of the iceberg. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.

Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risk. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world’s primary geopolitical actors. Whether they admit it or not, AI’s creators are themselves geopolitical actors, and their sovereignty over AI further entrenches the emerging “technopolar” order—one in which technology companies wield the kind of power in their domains once reserved for nation-states. For the past decade, big technology firms have effectively become independent, sovereign actors in the digital realms they have created. AI accelerates this trend and extends it far beyond the digital world. The technology’s complexity and the speed of its advancement will make it almost impossible for governments to make relevant rules at a reasonable pace. If governments do not catch up soon, it is possible they never will.

Thankfully, policymakers around the world have begun to wake up to the challenges posed by AI and wrestle with how to govern it. In May 2023, the G-7 launched the “Hiroshima AI process,” a forum devoted to harmonizing AI governance. In June, the European Parliament passed a draft of the EU’s AI Act, the first comprehensive attempt by the European Union to erect safeguards around the AI industry. And in July, UN Secretary-General António Guterres called for the establishment of a global AI regulatory watchdog. Meanwhile, in the United States, politicians on both sides of the aisle are calling for regulatory action. But many agree with Ted Cruz, the Republican senator from Texas, who concluded in June that Congress “doesn’t know what the hell it’s doing.”

Unfortunately, too much of the debate about AI governance remains trapped in a dangerous false dilemma: leverage artificial intelligence to expand national power or stifle it to avoid its risks. Even those who accurately diagnose the problem are trying to solve it by shoehorning AI into existing or historical governance frameworks. Yet AI cannot be governed like any previous technology, and it is already shifting traditional notions of geopolitical power.

The challenge is clear: to design a new governance framework fit for this unique technology. If global governance of AI is to become possible, the international system must move past traditional conceptions of sovereignty and welcome technology companies to the table. These actors may not derive legitimacy from a social contract, democracy, or the provision of public goods, but without them, effective AI governance will not stand a chance. This is one example of how the international community will need to rethink basic assumptions about the geopolitical order. But it is not the only one.

A challenge as unusual and pressing as AI demands an original solution. Before policymakers can begin to hash out an appropriate regulatory structure, they will need to agree on basic principles for how to govern AI. For starters, any governance framework will need to be precautionary, agile, inclusive, impermeable, and targeted. Building on these principles, policymakers should create at least three overlapping governance regimes: one for establishing facts and advising governments on the risks posed by AI, one for preventing an all-out arms race between them, and one for managing the disruptive forces of a technology unlike anything the world has seen.

Like it or not, 2035 is coming. Whether it is defined by the positive advances enabled by AI or the negative disruptions caused by it depends on what policymakers do now.

FASTER, HIGHER, STRONGER

AI is different—different from other technologies and different in its effect on power. It does not just pose policy challenges; its hyper-evolutionary nature also makes solving those challenges progressively harder. That is the AI power paradox.

The pace of progress is staggering. Take Moore’s Law, which for decades successfully predicted that the number of transistors on a chip, and with it computing power, would double roughly every two years. The new wave of AI makes that rate of progress seem quaint. When OpenAI launched its first large language model, known as GPT-1, in 2018, it had 117 million parameters—a measure of the system’s scale and complexity. Five years later, the company’s fourth-generation model, GPT-4, is thought to have over a trillion. The amount of computation used to train the most powerful AI models has increased by a factor of ten every year for the last ten years. Put another way, today’s most advanced AI models—also known as “frontier” models—use five billion times the computing power of cutting-edge models from a decade ago. Processing that once took weeks now happens in seconds. Models that can handle tens of trillions of parameters are coming in the next couple of years. “Brain scale” models with more than 100 trillion parameters—roughly the number of synapses in the human brain—will be viable within five years.

With each new order of magnitude, unexpected capabilities emerge. Few predicted that training on raw text would enable large language models to produce coherent, novel, and even creative sentences. Fewer still expected language models to be able to compose music or solve scientific problems, as some now can. Soon, AI developers will likely succeed in creating systems with self-improving capabilities—a critical juncture in the trajectory of this technology that should give everyone pause.

AI models are also doing more with less. Yesterday’s cutting-edge capabilities are running on smaller, cheaper, and more accessible systems today. Just three years after OpenAI released GPT-3, open-source teams have created models capable of the same level of performance that are less than one-sixtieth of its size—that is, 60 times cheaper to run in production, entirely free, and available to everyone on the Internet. Future large language models will probably follow this efficiency trajectory, becoming available in open-source form just two or three years after leading AI labs spend hundreds of millions of dollars developing them.

As with any software or code, AI algorithms are much easier and cheaper to copy and share (or steal) than physical assets. The proliferation risks are obvious. Meta’s powerful Llama-1 large language model, for instance, leaked to the Internet within days of debuting in March 2023. Although the most powerful models still require sophisticated hardware to work, midrange versions can run on computers that can be rented for a few dollars an hour. Soon, such models will run on smartphones. No technology this powerful has become so accessible, so widely, so quickly.

Robots preparing food at a hotpot restaurant in Beijing, November 2018

AI also differs from older technologies in that almost all of it can be characterized as “dual use”—having both military and civilian applications. Many systems are inherently general, and indeed, generality is the primary goal of many AI companies. They want their applications to help as many people in as many ways as possible. But the same systems that drive cars can drive tanks. An AI application built to diagnose diseases might be able to create—and weaponize—a new one. The boundaries between the safely civilian and the militarily destructive are inherently blurred, which partly explains why the United States has restricted the export of the most advanced semiconductors to China.

All this plays out on a global field: once released, AI models can and will be everywhere. And it will take just one malign or “breakout” model to wreak havoc. For that reason, regulating AI cannot be done in a patchwork manner. There is little use in regulating AI in some countries if it remains unregulated in others. Because AI can proliferate so easily, its governance can have no gaps.

What is more, the damage AI might do has no obvious cap, even as the incentives to build it (and the benefits of doing so) continue to grow. AI could be used to generate and spread toxic misinformation, eroding social trust and democracy; to surveil, manipulate, and subdue citizens, undermining individual and collective freedom; or to create powerful digital or physical weapons that threaten human lives. AI could also destroy millions of jobs, worsening existing inequalities and creating new ones; entrench discriminatory patterns and distort decision-making by amplifying bad information feedback loops; or spark unintended and uncontrollable military escalations that lead to war.

Nor is the time frame clear for the biggest risks. Online misinformation is an obvious short-term threat, just as autonomous warfare seems plausible in the medium term. Farther out on the horizon lurks the promise of artificial general intelligence, the still uncertain point where AI exceeds human performance at any given task, and the (admittedly speculative) peril that AGI could become self-directed, self-replicating, and self-improving beyond human control. All these dangers need to be factored into governance architecture from the outset.

AI is not the first technology with some of these potent characteristics, but it is the first to combine them all. AI systems are not like cars or airplanes, which are built on hardware amenable to incremental improvements and whose most costly failures come in the form of individual accidents. They are not like chemical or nuclear weapons, which are difficult and expensive to develop and store, let alone secretly share or deploy. As their enormous benefits become self-evident, AI systems will only grow bigger, better, cheaper, and more ubiquitous. They will even become capable of quasi autonomy—able to achieve concrete goals with minimal human oversight—and, potentially, of self-improvement. Any one of these features would challenge traditional governance models; all of them together render these models hopelessly inadequate.

TOO POWERFUL TO PAUSE

As if that were not enough, by shifting the structure and balance of global power, AI complicates the very political context in which it is governed. AI is not just software development as usual; it is an entirely new means of projecting power. In some cases, it will upend existing authorities; in others, it will entrench them. Moreover, its advancement is being propelled by irresistible incentives: every nation, corporation, and individual will want some version of it.

Within countries, AI will empower those who wield it to surveil, deceive, and even control populations—supercharging the collection and commercial use of personal data in democracies and sharpening the tools of repression authoritarian governments use to subdue their societies. Across countries, AI will be the focus of intense geopolitical competition. Whether for its repressive capabilities, economic potential, or military advantage, AI supremacy will be a strategic objective of every government with the resources to compete. The least imaginative strategies will pump money into homegrown AI champions or attempt to build and control supercomputers and algorithms. More nuanced strategies will foster specific competitive advantages, as France seeks to do by directly supporting AI startups; the United Kingdom, by capitalizing on its world-class universities and venture capital ecosystem; and the EU, by shaping the global conversation on regulation and norms.


CONTINUED:
 