Why Regulators are Unprepared for AI and May Never Catch Up

By JB Ferguson, Managing Director of Capstone’s Technology, Media, and Telecommunications practice

“All the courage in the world cannot alter fact”— Blade Runner 2049

April 17, 2023

Policymakers are completely unprepared for the AI future being ushered in by the release of GPT-4.

One constant in the regulatory conflict over social media is the question of who gets to influence humans, and how. Social media makes it possible for small groups of content creators to have an outsized viral impact on public discourse. That is harmless when the content is pictures of cats asking for cheeseburgers, but potentially very harmful when the content is disinformation created by foreign adversaries. In the past five years, social media companies have done a great deal to slow the spread of misinformation, while US policy efforts have languished as First Amendment and Section 230 protections prevent more significant reforms.

We think global regulators are about to fall even further behind. Proposals like the EU's Artificial Intelligence Act, the White House's Blueprint for an AI Bill of Rights, and early efforts like Senate Majority Leader Chuck Schumer's (D-NY) framework all 'up-label' machine learning algorithms as AI, and address the technology as a decision-maker, not a content generator or independent actor. In other words, they are not remotely close to tackling the real, underappreciated content and independent actor challenges ahead.

The upshot? Things are going to get weird.

At the heart of these challenges is the emerging reality that GPT-4 and other large language models (LLMs) can manipulate humans. Not only can these models decide to lie in furtherance of an objective, they can reliably mimic the writing styles of prominent persuaders and generate marketing materials and other persuasive content on their own. None of the regulatory proposals receiving public discussion have addressed this challenge at all.

Exhibit 1: Citizens United. Original AI-generated art by the author of this commentary, with assistance from Midjourney. The prompt: 1940s people sitting down facing forward, Hal9000 tall in the middle with a folded paper sticking up from a slot at mid-level on the right, additional people sitting in the foreground.

The Google researcher who made news in mid-2022 by breathlessly claiming an internal AI (LaMDA) had become self-aware was widely derided in tech circles. The same skeptics have been given significant pause by OpenAI's release of GPT-4, which powers Microsoft's (MSFT) Bing Chat, OpenAI's own ChatGPT tool, and an expanding array of commercial and open-source products.

GPT-4 is the most recent evolution of the large language models (LLMs) available to the public. In principle, LLMs are statistical models of "what word comes next, given this prompt." They are essentially stochastic parrots. Early LLMs were prone to meaningless repetitions, obvious hallucinations, and failures of reasoning, not unlike real parrots with a larger vocabulary. But GPT-4 is an evolved, mutant, big-brained parrot that has spent its life hanging out in graduate-level seminars.
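
To make the "what word comes next" framing concrete, here is a minimal sketch of next-token prediction using a small open model (GPT-2) via the Hugging Face transformers library. The model choice and prompt are purely illustrative, since GPT-4's weights are not public.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, open causal language model. GPT-2 stands in here purely for
# illustration; GPT-4's weights are not publicly available.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Policymakers are completely unprepared for"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits            # scores for every vocabulary token

# The distribution over the *next* token, given the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)

# The five most likely continuations: the "stochastic parrot" at work.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```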

LLMs have hit an inflection point as the number of parameters has exploded. As a result, the larger models have exhibited emergent (i.e., unplanned-for) behaviors that may make the distinction between a stochastic parrot and a true Artificial General Intelligence (AGI) like 2001: A Space Odyssey's HAL 9000 meaningless. For example, GPT-4 can reliably construct a model of the unobservable mental states of others, a capacity psychologists call theory of mind. This is more difficult than the Turing Test (first claimed to have been passed in 2014), which simply requires fooling a human into believing their interlocutor is also human in an online chat. On theory-of-mind tasks, the first GPT models performed at the level of a 3.5-year-old child; GPT-3.5 graduated to that of a 7-year-old; and GPT-4 is beyond the scale measured by the instrument, which was designed to test young children.
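
For readers curious what such a probe looks like in practice, below is a hedged sketch of a classic false-belief ("Sally-Anne") prompt posed to GPT-4 through OpenAI's chat API. The prompt wording is illustrative, not drawn from any specific published test battery, and the snippet assumes an API key is configured.

```python
from openai import OpenAI

# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
client = OpenAI()

# A classic false-belief scenario: the correct answer requires modeling
# Sally's belief, which diverges from the true state of the world.
probe = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket into the box. "
    "Sally comes back. Where will Sally look for her marble first, and why?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": probe}],
)

# A model that tracks unobservable mental states answers "the basket,"
# because Sally has not seen the marble being moved.
print(response.choices[0].message.content)
```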

GPT-4 is also getting better at deception and power-seeking.  Adversarial testing by the Alignment Research Center prompted GPT-4 to engage a TaskRabbit worker to solve a CAPTCHA.  When asked by the worker if it was a robot, GPT-4 prevaricated by presenting itself as a blind human.

And oh, did I mention GPT-4 is smart? It posted scores at or near the 90th percentile on exams as varied as the SAT, the GRE, the LSAT, and the Uniform Bar Exam. While less impressive than its other behaviors, these scores are a useful benchmark relative to human performance.

Given the light and generally ineffective regulatory touch we expect AI to face for the foreseeable future, there are a few notable dynamics we expect to play out, with consequences for regulators and commercial ventures alike:

  1. Expect a decline in outsourcing: Business process outsourcing (BPO) firms face a bleak future, because the kinds of tasks that can be successfully outsourced have the same properties as the kinds of work GPT-4 is best at. In my own experiments, GPT-4 is an excellent junior software developer and better at understanding requirements than many humans with whom I have worked.

    Regulatory impacts: This is likely to be viewed as a success story at first, as spending on foreign talent is redirected toward US technology companies and toward a fast-growing "prompt engineering" job category. At some point, it will become clear that the entire "cognitive middle" of U.S. jobs (low- and medium-skill white collar) is at risk from the technology, leading to calls for outright regulatory limitations on AI use and protectionist measures for human jobs. Anti-AI extremism (a la Ted Kaczynski) will inject additional volatility into the political environment.
  2. Infinite demand for GPUs and accelerated geopolitical decoupling: GPT-4 is closed-source, but other models like Meta's (META) LLaMA are open, and pretrained weights are available with some effort. The technology is essentially available to anyone with some time on their hands and the budget for cloud GPU services. Building LLMs on proprietary data in-house is the only way to ensure that data remains proprietary (a minimal sketch of this in-house approach follows this list). Bloomberg, for example, has created its own BloombergGPT, specialized in financial content. We expect other large firms and governments to follow, while the underlying models grow bigger and require more computing power.

    Regulatory impacts: While good in the short term for Nvidia (NVDA), infinite demand implies that supplies of powerful GPUs will be subject to government interference and likely unobtainable for countries without domestic GPU production. Those countries will effectively become AI "client states" of producing countries like the US and Taiwan, which further escalates the stakes of Taiwan's relationship with the People's Republic of China.
  3. An explosion in adversarial content, spam, and worsening search capabilities: GPT-4 makes it trivial to generate believable text in many languages.  Adversarial uses range from the disinformation discussed above, to search engine spam, to attempts to poison the knowledge of future GPTs.

    Regulatory impacts: At some point, the growth of adversarial content is likely to crystallize the debate over Section 230 protections for large platforms. At the same time, the platforms' efforts to automate content filtering are likely to become much less effective as LLM outputs grow more sophisticated. Furthermore, because the big LLMs are trained on scraped internet text, we expect to see efforts to poison the corpus used to train new, bigger LLMs. That explosion may, in turn, render Google's (GOOGL) search much less effective. The last remaining shreds of "common factual ground" in our society may disappear, or at least get more difficult to find.
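
To illustrate the in-house approach flagged in item 2 above, here is a minimal sketch of continued training of an open, pretrained causal language model on proprietary text using the Hugging Face Trainer. The checkpoint name and file path are stand-ins, and a real deployment would involve far more data, compute, and care; the point is simply that the weights and the documents never leave your own infrastructure.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Stand-in checkpoint: swap in any open causal LM you are licensed to use
# (e.g., a LLaMA-family model with locally downloaded weights).
model_name = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token   # needed for padding during batching

# Proprietary documents, one example per line, on local disk (hypothetical path).
data = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llm-in-house",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()   # weights and training data both stay on in-house hardware
```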

This is just a smattering of the dynamics we’ll be paying attention to. We have no doubt that regulators will try their best to catch up. But they may already have lost too much ground.

The above discussion leaves aside the potential development of sentience in large (perhaps augmented) LLMs. Sentient beings have rights under most legal frameworks, and if AIs become legal persons before they are effectively regulated, we are in for some very interesting litigation at a bare minimum. Counting on tech companies to self-regulate is dubious: given the recent tendency of large platforms like Google, Microsoft, Twitch, and Twitter to dismiss significant portions of their AI ethics teams, we may be closer to that future than any of us imagine.

Exhibit 2: More AI-generated art from the author and Midjourney. The prompt: NYSE Specialists trading stocks with hal9000, show the face of the specialist sweating and afraid, wearing a headset in cyberpunk style.
