There is a “haha, only serious” meme that humans have only a few years to acquire sufficient capital to participate in an AI-driven future, where returns on labor go to zero, and avoid becoming part of the “permanent underclass.” In an ironic twist of fate, it may be the explosion of AI agents that gets priced out first. We believe that development is an early sign that rising costs and tighter access will reshape who can capture value from AI across the ecosystem and industry.
State-of-the-art (SOTA) inference models, highly optimized for performance, are becoming more expensive due to crowding-out effects from new models, rising US Government demand, and potential restrictions on availability to foreign customers through export controls and know-your-customer (KYC) rules.
We can already see this in the deployment of Anthropic’s Claude Opus 4.7 on April 16th, 2026. The release was preceded by dramatic disruptions in usage accounting for customers on fixed-price contracts (Enterprise and the consumer Claude Pro and Max plans). Workflows that had previously been unremarkable, including development and deep-research tasks, began hitting session limits within 3-4 prompt turns. Anthropic has now banned the use of personal agents (e.g., Clawdbot, Hermes-Agent) with its fixed-price contracts; the bill for metered usage of personal agents can run into the low hundreds of dollars per day.
Opus 4.7’s release came on the heels of the leak, and subsequent confirmation, of Anthropic’s ‘Mythos’, a model with step-change improvements on most benchmarks that is capable of autonomous cybersecurity research. Anthropic deemed the model dangerous enough that it is being made available only to a consortium of cloud (Google, AWS), networking (Cisco, Palo Alto Networks), cybersecurity (CrowdStrike), and operating system (Microsoft, Apple, Linux Foundation) companies, along with JPMorganChase, in an effort named Project Glasswing.
Mythos is likely a much larger model than Opus, given the size of its benchmark gains and the absence of leaks (or even rumors) of breakthroughs in fundamental AI research. Roughly, a larger model requires supralinearly more compute for the same number of output tokens, offset by the efficiency of a smarter model being more economical about which tokens it generates.
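The cost intuition can be made concrete with a back-of-the-envelope sketch, using the common ≈2·N FLOPs-per-output-token rule of thumb for dense transformer inference. The parameter counts below are purely illustrative assumptions, not disclosed figures for any Anthropic model.

```python
# Back-of-the-envelope inference cost comparison using the widely cited
# ~2 * N FLOPs-per-output-token rule of thumb for dense transformers.
# Parameter counts are illustrative assumptions, not disclosed figures.

def flops_per_token(n_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one output token."""
    return 2 * n_params

small = flops_per_token(200e9)  # hypothetical 200B-parameter model
large = flops_per_token(2e12)   # hypothetical 2T-parameter model

# A 10x larger model needs ~10x the compute per token; at equal prices
# per FLOP, it only breaks even if it finishes tasks in ~10x fewer tokens.
print(f"compute ratio per token: {large / small:.0f}x")
```

The arithmetic is the point: a step-change model only nets out cheaper if its token efficiency improves as fast as its per-token cost rises.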
It was no surprise to us that the Mythos revelations brought the White House back to the negotiating table, with the Office of Management and Budget (OMB) notifying agency officials that it was working on making Mythos available to agencies. We think this development significantly undercuts the already-tenuous justification for designating Anthropic a Supply Chain Risk ineligible for use on Department of Defense contracts.
The speed of rapprochement between Anthropic and the Trump administration lends credence to our longer-term predictions about the overlap of US AI and cyber policy:
- Foundation models would eventually be treated as seriously as export-controlled munitions (see 2024 AI Preview), and the frontier labs are at risk of functional nationalization through the Defense Production Act (DPA) or other means.
- The US government is an underappreciated source of AI inference and related compute demand, of which cybersecurity offense and defense are only the tip of the iceberg (see 2025 Cybersecurity Preview, 2025 AI Preview, 2026 AI Preview).
Investment Implications
We think the exclusion of third-party agent harnesses from fixed-price AI contracts is the ‘beginning of the end’ of the era of inexpensive experimentation and skunkworks efforts to remake all kinds of companies from within. Just like the loss-leader contracts of the early cloud transition, we expect that some enterprises have developed token-profligate practices that will not be sustainable without significant re-engineering.
As a first-order effect, service providers and consultancies with relatively mature AI implementation practices become more valuable as the risk of cost overruns from internal efforts rises. Additionally, third-party providers of metered inference, including open-weight models and frontier models from Chinese labs (e.g., DeepSeek, Moonshot.ai), gain a better value proposition as US frontier-model costs ramp up.
Additionally, tightened token limits on fixed-price contracts make it proportionally less likely that SaaS customers will vibe-code internal replacement systems. We view the ‘SaaS-ocalypse’ and the resulting sell-off across software companies as overdone, especially in regulated verticals or other contexts with significant institutional inertia.
Lastly, we should not ignore the implications of Mythos’s jump in capabilities. That jump means that the scaling laws are not yet dead; AI has not peaked in any meaningful sense. Demand for memory, GPUs, servers, and the data centers to house them, not to mention the electricity necessary to power them all, is unlikely to abate.
The air may come out of the ‘agentic’ bubble as frontier labs and brand-name, model-agnostic providers such as Perplexity.ai build agent harnesses into their products without calling the functionality by that name. The magic of an agentic loop is simply a piece of software that sits between the user and the AI, chatting with the LLM and executing tools at its suggestion. The downside of that simplicity is that agent loops are absolute token-burners.
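The loop described above, and why it burns tokens, can be sketched in a few lines. The `llm` stub and word-based token counting below are illustrative assumptions, not any vendor’s API; the structural point is that the full conversation history is re-sent on every turn, so input-token usage grows roughly quadratically with the number of turns.

```python
# Minimal sketch of an agentic loop. `llm` is a stub standing in for a real
# chat-completion call; `run_tool` stands in for real tool execution.
# Tokens are approximated by word count for illustration only.

def llm(messages):
    """Stub model: requests a tool twice, then declares the task done."""
    tool_turns = sum(1 for m in messages if m["role"] == "tool")
    if tool_turns < 2:
        return {"role": "assistant", "tool": "run_tests", "content": "running tests"}
    return {"role": "assistant", "tool": None, "content": "done"}

def run_tool(name):
    return f"[output of {name}]"  # placeholder for real tool output

def agent_loop(task, max_turns=10):
    messages = [{"role": "user", "content": task}]
    tokens_sent = 0
    for _ in range(max_turns):
        # Every turn re-sends the entire history -- this is the token burn.
        tokens_sent += sum(len(m["content"].split()) for m in messages)
        reply = llm(messages)
        messages.append(reply)
        if reply["tool"] is None:
            return reply["content"], tokens_sent
        messages.append({"role": "tool", "content": run_tool(reply["tool"])})
    return None, tokens_sent

result, tokens = agent_loop("fix the failing build")
```

Even this three-turn toy re-sends the opening prompt three times; a real harness with file contents and tool logs in context multiplies that effect dramatically, which is what metered billing exposes.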
We will know the trends above are accelerating if and when other frontier labs begin restricting agent access to fixed-price contracts or otherwise limit the contracts’ effective value, when the US government moves more directly towards access or export controls, or if discussion of nationalization moves firmly within the famed Overton window (the range of subjects and arguments politically acceptable to the mainstream population at a given time). What is clear is that the faster things move, the more the details matter. Capstone continues to monitor these developments and their underappreciated impacts across the AI ecosystem and multiple sectors for our clients.
Read more from Capstone’s TMT team:
The Growing Compliance Pressure Facing Data Brokers
Third-Party Web Scrapers Poised to Benefit from Growing AI Training Data Demand
The Growing Antitrust Risk for Pricing Software Firms