January 2, 2025
By JB Ferguson, Head of Capstone’s Tech, Media and Telecom Team
The AI market will accelerate in 2025 in response to the US government’s efforts to build its AI infrastructure and roll back regulation. Amid this momentum, several dynamics will distinguish the winners from the losers. Notably, open-weight model developers like Meta and Mistral.ai will face underappreciated headwinds. In addition, US companies that develop AI models on behalf of EU customers face risks as enforcement of the EU AI Act begins later in the year.
Outlook at a Glance:
- US Government Poised to Emerge as Significant AI Buyer; Palantir, Anthropic Well-positioned for Government Partnership
- Meta and Other Open-Weight Model Developers to Face a More Challenging Regulatory Environment, Enforcement Risks Amid Efforts to Limit China’s Access to Models
- European Union AI Enforcement Remains Underappreciated, Creating Enforcement Risks for Alphabet, Meta, Microsoft, Others, Notably with Copyright Restrictions
US Government Poised to Emerge as Significant AI Buyer; Palantir, Anthropic Well-positioned for Government Partnership
Winners: Nvidia (NVDA), Palantir (PLTR), Anthropic, Amazon (AMZN)
Losers: Meta (META), Microsoft (MSFT), OpenAI
We think there is a clear path from Biden’s AI executive order (EO) and the resulting national security memorandum (NSM) to a sizeable government entry into the market for AI and associated datacenter capacity and hardware, and we expect that entry to continue under the Trump administration. Besides the obvious tailwinds for Nvidia (NVDA) from government GPU purchases, the policy will benefit Palantir (PLTR) and Anthropic. The two companies have partnered with Amazon.com Inc.’s (AMZN) Amazon Web Services (AWS) subsidiary to give the US Department of Defense (DOD) and the intelligence community (IC) access to Anthropic’s large language models, including Claude. We believe Meta Platforms Inc. (META) and the Microsoft Corp. (MSFT)/OpenAI partnership are less likely to benefit, given Trump’s previous criticisms of Meta and OpenAI’s heavy involvement in the development of Biden’s AI policy.
The EO lays the groundwork by creating a chief AI officer and internal AI governance boards at federal agencies. Alongside those changes to leadership structures, the EO launches an “AI talent surge” to bring AI expertise into government through expanded use of excepted-service positions, the US Digital Service, and other shared technical-expertise providers. The EO also expands immigration pathways for AI experts, albeit without commitments to specific volume goals.
Building on that groundwork, the NSM sets out the objectives of maintaining US leadership in the field and using AI in service of national security. In Section 1(g), the memo plainly states the need for shared AI resources (in contrast to the current siloed approach) and acknowledges that meeting this objective will require a significant overhaul of government “organizational and informational infrastructure.”
The NSM goes on to direct the IC, DOD, and Department of Energy (DOE) to consider the applicability of large-scale AI to their missions whenever constructing or renovating datacenters. It also creates a DOE pilot project for federated AI and “frontier AI-scale training, fine-tuning, and inference.”
At a minimum, this would imply at least one federal AI datacenter with over 10,000 graphics processing units (GPUs) and the power (tens of MW) needed to drive them. In the national security context, we would expect this to be a new build or a full refit of an existing datacenter by Amazon or Oracle Corp. (ORCL). At a maximum, it would imply a full build-out of a parallel government AI infrastructure, much like the government’s adoption of cloud computing.
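As a rough illustration of that scale, a back-of-envelope sketch (all constants below are our own assumptions, not figures from the EO or NSM) shows how a 10,000-GPU facility lands in the tens-of-MW range:

```python
# Back-of-envelope power estimate for a 10,000-GPU AI datacenter.
# All constants are illustrative assumptions, not figures from the EO or NSM.

NUM_GPUS = 10_000
GPU_TDP_KW = 0.7      # ~700 W per H100-class accelerator (assumed)
HOST_OVERHEAD = 1.5   # multiplier for CPUs, memory, and networking per server (assumed)
PUE = 1.3             # power usage effectiveness: cooling and facility overhead (assumed)

it_load_mw = NUM_GPUS * GPU_TDP_KW * HOST_OVERHEAD / 1_000
facility_mw = it_load_mw * PUE
print(f"IT load: ~{it_load_mw:.0f} MW; facility draw: ~{facility_mw:.0f} MW")
# Prints roughly: IT load: ~10 MW; facility draw: ~14 MW
```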
We believe this trajectory will continue in the Trump administration. Trump has acknowledged AI as a core component of US strategic competition with China. In addition, the US-China Economic and Security Review Commission’s (USCC) November 2024 report to Congress calls for funding a “Manhattan Project-like program” for winning the race to artificial general intelligence (AGI). Less ambitiously, we think the only way Elon Musk and Vivek Ramaswamy can achieve their objectives for the Department of Government Efficiency (DOGE) is to leverage AI on a whole-of-government basis to reduce headcount requirements, fight fraud, and re-engineer antiquated systems.
Meta and Other Open-Weight Model Developers to Face a More Challenging Regulatory Environment, Enforcement Risks Amid Efforts to Limit China’s Access to Models
Winners: Alphabet (GOOGL), Anthropic, OpenAI
Losers: Meta, Mistral.ai
Capstone believes the regulatory environment for open-weight models will become more challenging under the Trump administration. Open-weight models let anyone with the appropriate hardware run (and potentially modify) a model without oversight from the original developer. Critics believe the broad availability of open-weight models will aid China in advancing state-of-the-art AI in ways that minimize the need for the cutting-edge chips restricted by US export controls. We think Trump’s hawkish view of China will embolden those critics.
Restrictions on open-weight and open-source models would be a headwind for Meta, which offers the most popular large open-weight model (Llama 3.2). Trump previously threatened to jail Meta CEO Mark Zuckerberg if the company interfered in the 2024 election, although the two recently had dinner at Trump’s Mar-a-Lago resort in Florida. Meta has rebuilt its reputation with significant portions of the developer community through continuous contributions of AI models and research upon which new applications and startups can be built without paying royalties or sharing proprietary information.
We think the risks to Meta are in potential post-regulation enforcement matters, rather than direct financial consequences. It will be difficult for the company to control distribution of its models in a way that prevents technical violations that would give the Trump administration leverage.
Restrictions on open-weight models will present tailwinds for developers of closed models (Anthropic, Google, OpenAI) as AI-powered startups shift their activities to commercial providers.
European Union AI Enforcement Remains Underappreciated, Creating Enforcement Risks for Alphabet, Meta, Microsoft, Others, Notably with Copyright Restrictions
Winners: Mistral.ai
Losers: Alphabet (GOOGL), Anthropic, Meta (META), Microsoft (MSFT), OpenAI, X.com
Capstone believes US generative AI companies are at risk from enforcement actions when the EU AI Act’s obligations for general-purpose AI take effect in August 2025. Much like the arc of the General Data Protection Regulation’s (GDPR) privacy controls, a flurry of concern around the AI Act’s passage has given way to a sanguine belief that affected companies have found pathways to compliance with the law.
General-purpose AI (GPAI) models, as defined by the act, include the popular large language model-driven chatbots on the market. Most have been trained with enough computing power to be automatically designated as posing systemic risks. Models in the latter category face reporting, documentation, and testing requirements generally in line with accepted best practice, as well as post-deployment monitoring for safety incidents and misuse.
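For context, the act presumes systemic risk when cumulative training compute exceeds 10^25 floating-point operations, and a common rule of thumb estimates training compute as roughly 6 × parameters × training tokens. The sketch below (the model figures are hypothetical, not disclosed values) shows how a frontier-scale model clears that bar:

```python
# Rough check against the EU AI Act's systemic-risk compute threshold (1e25 FLOPs),
# using the common ~6 * parameters * tokens approximation of training compute.
# The model figures below are hypothetical, not disclosed values.

SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6ND rule of thumb."""
    return 6 * params * tokens

# Hypothetical frontier-scale model: 400B parameters trained on 15T tokens.
flops = training_flops(400e9, 15e12)
print(f"{flops:.1e} FLOPs; systemic risk presumed: {flops > SYSTEMIC_RISK_FLOPS}")
# Prints: 3.6e+25 FLOPs; systemic risk presumed: True
```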
We think the most likely enforcement point is the act’s requirement to preserve copyright holders’ rights to restrict the use of their content for model training. The act allows rights holders to opt out of training datasets with machine-readable notifications, such as appropriate directives in the robots.txt file available on most web servers.
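For illustration, a rights holder signaling such an opt-out might add directives like the following to robots.txt (the user-agent tokens shown are the publicly documented names of several AI training crawlers; whether any given directive satisfies the act’s machine-readable standard remains untested):

```
# Illustrative robots.txt opt-out for AI training crawlers.

User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended  # Google's opt-out token for AI training
Disallow: /

User-agent: CCBot            # Common Crawl, a common source of training datasets
Disallow: /
```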
The model developers are facing a wide variety of copyright suits in the US and elsewhere regarding their use of protected material. Much of this material was collated into training datasets by third parties that did not include opt-out notifications. Furthermore, the model developers’ content-scraping bots are notorious for ignoring robots.txt and other opt-out notices.
The risk to model developers comes from the chilling effect of the first enforcement action. Much like ad-tech firms’ reactions to GDPR enforcement, clients and investors will have concerns about the companies’ ability to develop new models without reference to protected material. In theory, a compliant approach would require either throwing out foundational datasets or evaluating their contents item by item for compliance.
JB Ferguson, Head of Capstone’s Tech, Media and Telecom Team