Governmental Hard Power is Finally Coming for AI, and is Already Losing

November 6, 2023

By JB Ferguson, Head of Capstone’s Technology, Media, and Telecommunications practice

World governments have fully joined the fight against the unconstrained growth of general-purpose AI and its potential negative consequences. They are likely too late.

Political leaders and bosses of top AI firms gathered last week for the UK’s AI Safety Summit to hash out an international agreement on the safe and responsible development of the rapidly advancing technology. The two-day summit hosted government officials and companies from around the world, including the US and China. Companies including Meta, Google DeepMind, and OpenAI agreed to allow regulators to test their latest AI products before releasing them to the public. Notably, governments coalesced around a common set of explicit concerns, including:

  • Catastrophic harm from Artificial General Intelligence (AGI)
  • Enabling novel chemical, biological, radiological, and nuclear (CBRN) weapons
  • Cybersecurity threats

Their implicit concerns vary: the US wants to maintain its dominance in the industry, China is frustrated with being cut off from the most advanced hardware, and the EU hopes to avoid another GDPR-like scenario in which American platforms overwhelm (and ultimately ignore) its ability to regulate them. In any case, Prime Minister Rishi Sunak’s summit was a diplomatic coup. It resulted in 28 nations, including China, signing the Bletchley Declaration, a commitment by the signatories to build a shared understanding of AI safety risks and to develop risk-based policies in a spirit of cooperation.

Beyond the agreement, the US has taken the most aggressive approach among Western powers. Until now, the focus of policy has been enhancing export controls to prevent China from accessing the hardware necessary for building competing models.

Last week, President Biden’s AI executive order took aim at domestic large language model (LLM) development, placing stricter limits on the horsepower of advanced models. It draws a line in the sand, directing the Commerce Department to require any company developing a model above a certain compute threshold to provide extensive reporting on physical and cybersecurity measures for training, retention of model weights, and performance in red-team adversarial testing. In addition, anyone building or possessing a cluster capable of certain levels of computing power is subject to a similar registration regime.
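
For readers who want a sense of what the threshold means in practice, the following is a minimal Python sketch of a compute check. It assumes the widely reported figures from the order (roughly 10^26 total operations for a training run and 10^20 operations per second for a cluster) and the common 6 × parameters × tokens rule of thumb for training compute; the figures, function names, and the heuristic itself are illustrative, not language from the order.

```python
# Illustrative only: a rough check of whether a training run or compute
# cluster would cross the executive order's reported notification thresholds.
# The 6 * parameters * tokens approximation is a common heuristic, not the
# order's own formula.

EO_TRAINING_THRESHOLD_OPS = 1e26         # reported total-operations threshold
EO_CLUSTER_THRESHOLD_OPS_PER_SEC = 1e20  # reported cluster-capacity threshold


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute with the ~6 * N * D rule of thumb."""
    return 6.0 * parameters * training_tokens


def training_run_must_report(parameters: float, training_tokens: float) -> bool:
    return estimated_training_ops(parameters, training_tokens) >= EO_TRAINING_THRESHOLD_OPS


def cluster_must_report(peak_ops_per_second: float) -> bool:
    return peak_ops_per_second >= EO_CLUSTER_THRESHOLD_OPS_PER_SEC


if __name__ == "__main__":
    # Hypothetical 70B-parameter model trained on 10 trillion tokens.
    print(estimated_training_ops(70e9, 10e12))    # ~4.2e24 operations
    print(training_run_must_report(70e9, 10e12))  # False: below the threshold
```

Under these hypothetical inputs the run comes in around 4 × 10^24 operations, comfortably below the reported threshold, which is why so much of the debate centers on where that line is drawn and how it might move.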

This was the only policy option available that did not cede regulatory ground to the EU or rely on Congress to pass legislation. Although leveraging the Defense Production Act in this way is novel, affected companies are unlikely to litigate against a measure that prevents any competitor from surpassing them. The order contains many other positive elements, including National Institute of Standards and Technology standards development and immigration reform for AI talent.

But Capstone believes the AI cat is already out of the proverbial bag. While the order may have been the best possible policy maneuver for the Biden administration, it does not address many of the specific, tangible, burgeoning threats highlighted as the primary motivation for policy intervention. There is no shortage of things to stress about when it comes to our AI future, but three notable underappreciated concerns are as follows:

Smaller models are getting more powerful and dangerous: Researchers at Microsoft have trained a 1.3B-parameter model for code generation that is competitive with much larger 10B-parameter models. A Google team has developed a method of using large models to train smaller specialized models at 1/500th of the larger model’s size. Small specialized models trained by an adversarial actor are exactly the kind of AI tool that would increase the risk of CBRN weapons. Put plainly, it is technically possible for terrorists to develop effective bomb-making AI helpers that would run on a high-end consumer laptop.
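
To illustrate the mechanism behind such results, here is a minimal knowledge-distillation sketch in PyTorch: a small student model is trained to match the softened output distribution of a large teacher. This is a textbook version of the technique, not the specific Microsoft or Google method cited above.

```python
# Generic knowledge-distillation loss (PyTorch): a small "student" model is
# trained to imitate a large "teacher". Illustrative of the broad technique
# only; not the specific methods referenced in the text.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # The t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)


# Inside a training loop, the teacher runs without gradients and the student
# is updated to imitate it on the same batch:
#   with torch.no_grad():
#       teacher_logits = teacher(batch)
#   loss = distillation_loss(student(batch), teacher_logits)
#   loss.backward()
```

The policy-relevant point is that sequence-level variants of this idea need little more than query access to a capable model's outputs, so capability can migrate from heavily scrutinized frontier systems into small models that sit far below any compute threshold.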

Policy Implications: Commerce will have to consider significantly lowering the compute threshold for regulatory notification. However, lowering the threshold risks stifling academic research and open-source development, both significant sources of US AI leadership. Meta, a16z, and others have already begun lobbying on this concern.

What happens when we run out of training data? Research firm Epoch AI suggests we will run out of high-quality language data in 2026, low-quality language data somewhere between 2030 and 2050, and vision data in about the same time frame. These developments threaten the significant lead American industry holds: hitting the content ceiling will flatten AI’s exponential gains into something linear, allowing China and others to catch up.
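
The arithmetic behind projections like these is simple enough to sketch. The toy model below compounds annual demand for training tokens against a fixed stock of usable data; every number in it is a placeholder chosen for illustration, not an Epoch AI estimate.

```python
# Toy projection of when cumulative demand for training tokens exhausts a
# fixed stock of usable data. All figures are placeholders for illustration.

def year_data_runs_out(stock_tokens: float,
                       first_year_demand_tokens: float,
                       annual_growth: float,
                       start_year: int = 2023,
                       horizon_years: int = 50):
    """Return the first year cumulative demand exceeds the stock, else None."""
    cumulative = 0.0
    for offset in range(horizon_years):
        cumulative += first_year_demand_tokens * (annual_growth ** offset)
        if cumulative > stock_tokens:
            return start_year + offset
    return None


if __name__ == "__main__":
    # Hypothetical: 5e14 usable tokens, 1e13 tokens consumed in the first
    # year, and demand doubling every year.
    print(year_data_runs_out(5e14, 1e13, 2.0))  # -> 2028 under these inputs
```

Because demand compounds, the exhaustion year depends only logarithmically on the size of the stock, which is why even large disagreements about how much usable data exists shift the projected date by only a few years.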

Policy Implications: At some point, the internet will become so polluted with AI-generated content that verifiably human-created material will command a significant premium. To maintain its leadership in AI, the US will have to significantly increase funding for the arts and academic research to expand the supply of high-quality data. In the long term, this could reverse the decades-long shift in the university labor force towards adjunct instructors and administrators.

AI’s Increasingly Erratic and Unanticipated Behavior:  Training LLMs to interact with external software (i.e., “agents”) can have unanticipated real-world effects, not unlike leaving a precocious toddler unsupervised.  Developments in training methods, including specialized data sets, will create credible cybersecurity threats driven by models small enough to be trained on-premises with a small cluster of consumer-grade hardware.
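
To see why agents raise the stakes, consider a stripped-down agent loop in Python: the model emits text, the host program parses it into a tool call, and the host executes it. The `call_model` stub and the tiny tool registry below are hypothetical stand-ins, but the structure mirrors how common agent frameworks operate.

```python
# Stripped-down "agent" loop: the model emits text, the host parses it into a
# tool call, and the host executes it. `call_model` is a hypothetical stand-in
# for any LLM API; the tool registry is deliberately tiny and benign.
import json


def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call. This stub always asks to read a file
    # so the example stays self-contained.
    return json.dumps({"tool": "read_file", "args": {"path": "notes.txt"}})


TOOLS = {
    "read_file": lambda path: open(path, encoding="utf-8").read(),
}


def run_agent_step(task: str) -> str:
    """One iteration: the model proposes an action, the host performs it."""
    action = json.loads(call_model(task))
    tool = TOOLS.get(action["tool"])
    if tool is None:
        raise ValueError(f"model requested an unregistered tool: {action['tool']}")
    # The host process, not the model, performs the side effect: anything the
    # host is permitted to do, the model can now trigger.
    return tool(**action["args"])
```

The structural risk is that every side effect the host is permitted to perform, whether file writes, network requests, or shell commands, becomes something the model can trigger, so erratic model behavior translates directly into real-world action.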

Policy Implications: Technical limits can incrementally de-risk AI, but at the end of the day, emergent harms are addressed through some combination of changes in insurance and police work. The growth of cybercrime on the internet has been met by a combination of increased law enforcement sophistication and growth in cyber-related insurance products. We expect the treatment of AI misuse to oscillate between over- and under-reactions.

This is just a smattering of the dynamics we are watching closely for our corporate and investor clients across all levels of government, sectors, and industries, along with the risks and opportunities that flow from how policymakers respond. We have no doubt that regulators will try their best to catch up. But they may already have lost too much ground.

