The rapid advancement of AI and its near ubiquity will have underappreciated consequences for consumer-facing regulated businesses—developments the industry may not be ready for. Already, ChatGPT, Claude, and other large language models (LLMs) reduce not only the obvious frictions that prevent consumers from pursuing their rights under the law but also the cognitive barriers to doing so.
And we’re just getting started.
The exercise of consumer protections—for example, sending a dispute letter to a debt collector or pressing a landlord to make repairs—often requires the consumer to use the right language, reference the right statute, and have a modicum of understanding of the underlying law.
Regulated businesses have traditionally benefited from this information asymmetry, both in practical terms and through the inverse correlation between a consumer's likelihood of success and their willingness to spend the time necessary to pursue smaller claims. That is, the consumers with the skills and knowledge to pursue a claim are the least likely to find the juice worth the squeeze.
AI is changing all of that.
A post on Threads making the rounds has been instructive. The author’s brother-in-law died from a heart attack two months after his medical insurance lapsed, and the hospital presented a bill for $195,000. Using Anthropic’s Claude, the author discovered the hospital had double-billed for master procedures and each component procedure, billed for inpatient services while the decedent was still in the emergency department, and charged many times the Medicare rate for supplies.
Ultimately, the author negotiated the bill down to about $35,000.
The example above is an outsized result for one motivated party, but one can imagine similar uses in any situation where the line items on a bill are a mix of allowable-but-negotiable, totally made-up, and mandatory charges (e.g., auto sales).
There is a second class of AI consumer advocacy use cases that is far more dangerous to unprepared firms.
Think about statutorily guaranteed consumer rights that are (currently) rarely exercised: data subject access right requests, pro se litigation (under statutes with de minimis private rights of action, or in small claims court), and various kinds of requests to financial institutions and credit reporting agencies. With AI assistance, these all have a ‘multiplier effect’—the level of expense and effort required for a company to answer is far higher than the thought involved in a consumer’s offhand “Yo Claude, can you handle this?”
Effectively, AI enables consumers to mount massive distributed denial-of-service (DDoS) attacks against businesses by issuing requests for which the required response often involves manual investigation and validation of details.
This is not to say that most businesses are likely to face coordinated campaigns from consumers, although no doubt one or more such efforts will go viral.
Rather, affected firms will face a trickle and then a flood of requests for which they are fundamentally unprepared. Regulatory relief for this phenomenon is not forthcoming. The statutes and regs say what they say, and consumers’ use of an enabling technology to bridge language, comprehension, and legal knowledge gaps is unlikely to be perceived as unfair to regulated firms in a way that would motivate relaxing the requirements through rulemaking or legislation.
Affected firms should be strategizing now to handle this challenge. At a minimum, preparation likely includes:
- Address compliance-related technical debt: Most of the firms we’ve seen with data subject obligations under California or EU law handle access right requests manually, and some still process opt-outs by hand. That lack of automation is likely driven by awkward data relationships between incompatible systems, legacy code, and other issues.
- Escalate AI use: LLMs can classify inbound requests relatively easily (a minimal sketch follows this list). Over time, firms can identify patterns that flag the typical ways consumers’ LLMs misapprehend the relevant regulations. Separately, AI development automation may be the only way for some firms to address technical debt in a timely fashion.
- Get ahead of information asymmetries: Firms that are economically dependent on low-information customers may have to consider different business models. Firms with high asymmetry in part of their selling process (e.g., aftermarket warranties) should consider lower-margin, more sustainable approaches.
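
As a rough illustration of the classification idea in the list above, here is a minimal sketch of LLM-based triage of inbound consumer requests. It assumes the Anthropic Python SDK with an API key in the environment; the category list, prompt, model identifier, and fallback behavior are illustrative placeholders, not a recommended production design.

```python
# Minimal triage sketch: classify inbound consumer requests with an LLM.
# Assumptions: the Anthropic Python SDK is installed, ANTHROPIC_API_KEY is set,
# and the category list / model id below are illustrative placeholders.
import json

import anthropic

CATEGORIES = [
    "data_subject_access_request",
    "opt_out_or_deletion",
    "debt_validation_or_dispute",
    "billing_dispute",
    "general_complaint",
    "other",
]

PROMPT_PREFIX = (
    "Classify the consumer message below into exactly one of these categories: "
    + ", ".join(CATEGORIES)
    + ". Also note, in one sentence, any apparent misreading of the relevant statute.\n"
    'Respond only with JSON of the form {"category": "...", "possible_misapprehension": "..."}.\n\n'
    "Message:\n"
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def classify_request(message: str) -> dict:
    """Return a category label plus a note on any apparent legal misapprehension."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whichever model you have validated
        max_tokens=300,
        messages=[{"role": "user", "content": PROMPT_PREFIX + message}],
    )
    try:
        result = json.loads(response.content[0].text)
    except (ValueError, IndexError):
        result = None
    if not isinstance(result, dict) or result.get("category") not in CATEGORIES:
        # Fail closed: anything unrecognized goes to human review.
        result = {"category": "other", "possible_misapprehension": None}
    return result


if __name__ == "__main__":
    sample = "Under the CCPA I demand every piece of data you hold about me within 10 days."
    print(classify_request(sample))
```

Anything the model cannot classify cleanly, or that does not parse as valid JSON, should be routed to human review rather than answered automatically; the sketch fails closed to an "other" bucket for exactly that reason.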
 
There are likely to be tailwinds for some investors and firms as well—a growing diligence area where we expect to spend even more time. We are still in relatively early days of the AI transition, and vertical SaaS platforms are uniquely positioned to address this challenge for their clients. The larger constellation of compliance consulting and implementation firms will likely benefit, as will consumer-facing businesses that prioritize compliance relative to their peers.
AI just made every consumer dangerous. The smartest firms will treat this not as a threat, but as a turning point. As AI redraws the map between consumers and corporations, we’ll be paying close attention to the risks and the opportunities.