
Echoes of Project Maven
This past January, Reuters reported that American Large Language Model (LLM) developer Anthropic had reached a standstill in its dealings with the U.S. Department of Defense (DoD). Talks between Anthropic and the DoD under the auspices of a $200 million contract awarded to the company (and a handful of others) in July 2025 have run aground over the guardrails – or lack thereof – imposed on the technologies Anthropic is developing for DoD applications. Specifically, according to the Reuters report, the company is concerned about the use of its technology to “assist weapons targeting without sufficient human oversight” and to conduct domestic U.S. surveillance.
What this practically entails is somewhat unclear. The July 2025 contract specifies only that Anthropic is to develop “prototype frontier AI capabilities” for both “warfighting and enterprise domains.” It is not, to be sure, particularly difficult to imagine what these applications might be, given the technology Anthropic is known for – generative LLMs that flexibly produce outputs resembling their training data – but any such list would be mere speculation, and likely a diffuse one at that.
The underlying tension between the needs of the DoD and those of major LLM developers likewise surfaced in a separate Reuters report indicating that the DoD is pushing these firms to make their AI products available for use on classified networks without the restrictions that these firms would normally impose on consumers. (This is, for what it is worth, consistent with the spirit of the DoD’s latest AI Acceleration Strategy.) Classified networks, the report notes, can be used for activities such as mission planning – and Anthropic’s “Claude” model was made available on classified networks, but only through third-party platforms, including Palantir’s Artificial Intelligence Platform (AIP) and Amazon’s Top Secret Cloud, and subject to Anthropic’s usage policies.
Anthropic has gone out of its way to brand itself as a company concerned with AI safety – an umbrella term for concern with the potentially harmful uses of AI tools, under which companies and individuals vary in the severity of harms they choose to focus on (from the relatively mundane to the catastrophic). It is therefore little surprise that the company’s dealings with the DoD have bred discontent. Indeed, Semafor reported in January that when Defense Secretary Pete Hegseth said that month that the DoD “will not employ AI models that won’t allow you to fight wars,” he was specifically referring to Anthropic.
One might pause to appreciate the unusual nature of this developing relationship. Silicon Valley is no stranger to bouts of discontent over the use of emerging technologies for defense applications – Google’s work on Project Maven provoked internal company protests in 2018 – but that the present discontent has spilled into public view should not be overlooked. And the generative AI technologies at the center of today’s disputes are substantially more powerful than anything envisioned for Project Maven.
The situation has escalated sharply in the past week. On February 13th, the Wall Street Journal reported that Anthropic’s Claude model was used in the U.S. military operation to capture Venezuelan President Nicolás Maduro – in contravention of the company’s usage policies – via a Palantir-owned platform in use by the DoD. (Recall that Anthropic’s model was made available on classified networks through third-party platforms.) The Wall Street Journal report is exceptionally light on details, noting only that the model was used and that it is possible – though unconfirmed – that other AI models were used, too.
Shortly after this reporting, Axios reported that Claude was used for the Maduro capture “during the active operation, not just in preparations for it.” This is, again, open to interpretation.
Semafor’s Reed Albergotti notes from conversations with some at Palantir that “Claude played no meaningful role in seizing Maduro…because the technology isn’t good enough yet to warrant so much concern.” Models like Claude that run on Palantir’s AIP, he notes, are typically used to build bespoke software applications. This would be consistent with the sharp rise in usefulness that some programmers have seen from Claude Code in recent months.
Nevertheless, in a further sharp escalation, Axios separately reported that the Pentagon is considering canceling its July 2025 contract with Anthropic. Most interestingly, this report notes, citing an anonymous senior DoD official, that Anthropic reached out to an executive at Palantir about the use of Claude in Maduro’s capture – which a spokesperson for Anthropic “flatly denied.”
The most recent reporting indicates that Secretary Hegseth is considering something of a nuclear option: the severing of the DoD’s business ties to Anthropic and the designation of Anthropic as a “supply chain risk” – an extraordinary measure that would force other DoD contractors to cut ties with the LLM vendor.
Unusual Dynamics in an Unusual Time
As with the July 2025 contract, details on the exact uses of Claude during the Maduro capture remain limited.
There is, to be sure, one somewhat speculative line of thought worth articulating: if the DoD wanted to put pressure on Anthropic – and the other major American LLM vendors – in their ongoing dispute over appropriate safeguards, then sharing with reporters that Claude was used to cause direct harm is perhaps the most effective way of doing so.
Recall that Anthropic brands itself as a safety-concerned AI firm. Although company brands and company actions are not always in alignment, safety researchers at Anthropic appear sincere in their beliefs, if prone to Silicon Valley’s familiar bouts of AI-driven mystical fervor.
The news that Claude was used in the Maduro capture would therefore sting the company. It is not inconceivable that this was the point of sharing such information with the Wall Street Journal and Axios. The “use” of a model during or in preparation for a high-stakes operation like the capture of Maduro could, in the absence of further details, mean practically anything. Thus, it may be worth considering that the point of the ongoing leaks to various publications is not these uses themselves, but their effects on the relationship between LLM vendors and the DoD going forward. Indeed, Axios itself cited an anonymous source suggesting that the DoD was keen on waging this particular fight publicly, given the extent of its private frustration with Anthropic.
(Heightened) Senses of Capabilities
It is likewise important to consider another angle: as discussed previously, dual-use technologies are in a highly unusual period of development and deployment, in which the reasoning underpinning commercial deployments in the United States is mirrored to a striking extent in defense acquisition. Interestingly, this dynamic is most pronounced in AI, where the DoD takes for granted that the private sector will develop the fundamentals of this strategic technology, which the Department then adopts without delay – something of a reversal of the pattern seen with a number of past foundational technologies.
One result of this unusual dynamic is that DoD leadership finds itself aligned with private-sector actors like Anthropic on the perceived capabilities of LLM-based technologies – the disagreements concern their appropriate uses.
It is therefore useful to bear in mind, as Anthropic and the DoD tussle over these uses, that the actors involved in these unusual dynamics have acquired senses of these models’ capabilities that may or may not reflect reality. The models are undoubtedly among the most powerful digital technologies ever created. But capabilities in a vacuum – detached from the scaffolding and restrictions that characterize safety- and mission-critical systems – often fail to translate into widespread, enduring, practical impacts.
(That the company’s coding assistant, Claude Code, embeds its models within a kind of scaffolding conducive to coding assistance and automation makes it less surprising that the tool is proving popular with Enterprise customers; if something works, it works.)
Something akin to a Frankenstein’s Monster effect is therefore manifesting in the AI industry: the expectations the private sector set for the technology’s capabilities (such that those who fail to adopt it will fall behind irrevocably) are now so firmly held by DoD leadership that the private sector must navigate their implications.
Will Anthropic Survive the DoD?
Anthropic is currently experiencing a sharp, even meteoric rise in the commercial domain after having lagged competitors like OpenAI for much of the past three years. Just days ago, the company raised $30 billion in Series G funding at a $380 billion valuation. The run-rate revenue (an extrapolation of annual revenue from shorter-period earnings) for Anthropic’s Claude Code reached more than $2.5 billion in February 2026, having more than doubled since the year began. Importantly, Enterprise use accounted for more than half of Claude Code’s revenue – a significant figure given the greater sensitivities around the use of models for Enterprise work as opposed to individual consumer uses.
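(To illustrate the run-rate extrapolation with hypothetical figures: a month in which Claude Code earned roughly $210 million would annualize to 12 × $210 million ≈ $2.5 billion in run-rate revenue. The actual period and figures underlying the reported number have not been disclosed.)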
Claude Code places Anthropic near the front of the competition among LLM vendors to provide generative AI tools with high impact in specific domains, rather than general-purpose, use-it-as-you-need-it tools.
Yet this puts the company in a fragile position with respect to the DoD. The U.S. defense bureaucracy is no trivial customer, and missteps today could amount to hundreds of millions of dollars or more lost in the immediate years ahead. Some major American LLM vendors have struggled, to the point of cliché, to achieve profitability given the enormous expenses of training state-of-the-art models and deploying them at scale (the latter driven by inference costs – the costs accrued as a model generates outputs across millions of separate instances). Lucrative sources of revenue, like longer-term contracts with the DoD, therefore cannot be dismissed out of hand.
Anthropic’s leadership may judge that its share of the commercial – particularly Enterprise – market for LLM-based services will be sufficient to achieve its longer-term financial interests. In this scenario, it may choose to forgo further cooperation with the DoD as it seeks to enforce its usage policies. This would be a conscious decision that judges the commercial domain sufficient to cover enormous technical and infrastructural costs – and other LLM vendors would take notice.
In the immediate term, if Anthropic sacrificed its DoD relationship, and if the DoD designated Anthropic a supply chain risk, then the AI ecosystem would undergo a partial fragmentation that contradicts much of the spirit of diffusion of the past three-plus years. Other DoD contractors – including competitors of Anthropic, like Google, OpenAI, and xAI – would have to cease using Anthropic’s models for their own purposes.
Some of this, to be sure, was set in motion by Anthropic itself prior to its public DoD dispute. Anthropic cut off OpenAI’s access to its Application Programming Interface (API) for Claude Code in August 2025, as OpenAI was using Claude Code for technical work related to its then-unreleased GPT-5 model. Similar moves followed for other competitors, like xAI, in January 2026.
Should the DoD make the supply chain risk designation, Anthropic would find itself having to prevent the partial fragmentation from spreading – to avoid a kind of pariah status among relevant industry actors (including investors) spooked about the company’s longer-term financial health. This may be somewhat offset by increased consumer usage of Anthropic’s models, spurred by alignment in personal values (though it is unclear how large or serious this consumer base is or could be; that the company has an apparent lead in AI coding assistance is no small factor here).
If Anthropic’s leadership judges that DoD contracts are too lucrative to forego, then the company will have to balance this (perceived) reality with the safety-focused concerns of its in-house talent – researchers who are in finite supply in a fiercely competitive industry.
This is no small obstacle, as any disruption to the momentum among Anthropic’s engineers and researchers could translate into pressure on the company’s ability to develop and market its models under the “concerned-with-safety” umbrella. The stakes in this respect are far more pronounced and immediate than they were during Google’s internal protests over Project Maven.
How Anthropic ultimately navigates its position in this environment, and how the DoD reacts in the event of chilled relations, may establish certain norms among other American LLM vendors in the immediate years ahead. The dynamic should be monitored carefully.
Vincent Carchidi has a background in defense and policy analysis, specializing in critical and emerging technologies. He is currently a Defense Industry Analyst with Forecast International. He also maintains a background in cognitive science, with an interest in artificial intelligence.

