The AI Bubble in Defense: What to Expect in a Burst

AI Bubble Conceptualization (Gemini)

If Artificial Intelligence (AI) is being developed in a financial bubble, and if this bubble bursts, several defense-related trends may be set into motion.

First, defense contractors selling AI products or services may revert to an earlier tendency to re-package AI technologies under more palatable monikers, like “data analytics.”

Second, national defense organizations in the United States and elsewhere may take up the mantle of basic AI research & development suitable for their own ends, capturing the value left in the bubble’s wake.

Finally, defense contractors – facing a smaller pool of available capital to fund AI R&D – would face greater pressure to tailor their offerings to the mitigation of shortcomings in state-of-the-art AI models for specific defense applications.

Dual-Use, Dual-Risk

AI is a dual-use technology; it can be applied to tasks or problems in either commercial or defense domains. The most impactful breakthroughs in AI research & development over the past fifteen-odd years were made by private actors operating within the commercial domain. These breakthroughs have allowed AI to be applied to a wider range of applications. While most of these applications are commercial – meant to increase the productivity of a company’s workforce and, with it, its profit margins – defense organizations have accelerated existing efforts to identify use-cases for AI models and apply them in compliance with existing or new organizational standards.

Talk of an AI “bubble” naturally intersects with these trends. Indeed, I have written on this topic twice before, first in 2022 and then again in 2023. In October, most investors surveyed by Bank of America said they believed AI stocks were in a bubble.

But what does this mean?

In two sentences: To claim that there is an AI bubble is to claim that private investments in AI and the valuations of AI companies are massively inflated relative to the financial gains accrued by the actual demand for AI. To claim that this AI bubble will “burst” is to claim that the valuations of AI companies and private investments will drop precipitously, thereby diminishing economic interest in AI, and with it, its pace of development.

AI bubble talk has been largely confined to the commercial domain. To be sure, there is some discussion that parts of the defense industry are currently operating in a bubble-like environment. Rheinmetall CEO Armin Papperger said in September that actual demand for military drones, and the revenue accrued by that demand, may be lower than common wisdom holds.

Concretely, this would not entail that drones are not in demand. As my Forecast International colleague Andrew Dardine recently surveyed, the international market for military drones – and for a multi-layered “drone wall” for national or regional protection – is sprawling, touching battlefields in Ukraine, European NATO members, the U.S. military, and Taiwanese, Israeli, and Australian defense customers, among others.

The bursting of a bubble in the international drone market would instead lead to a drop in investment in the technology’s development as the market enters a period of correction. Drones would still be in use and in development, though the latter would proceed more slowly, less publicly, and more deliberately. The “pain” often referenced in such market corrections is that some firms will lose out in the drone market; such is inevitable in a competitive market.

Prospects for After the Burst

Whether there is an AI bubble is not our concern here. We instead ask: if there is a bubble, what might its bursting look like specifically in defense?

First, AI’s previous downturns are instructive: in the 1990s and early 2000s, for example, “Artificial Intelligence” was something of a dirty word; businesses were not keen to advertise their products as “AI” given the technology’s association with disappointing returns on investment. Instead, businesses resorted to advertising the technology under new pretenses: rule-based automation and software agents, as historian Thomas Haigh details, were more palatable.

Should there be an AI downturn today, some defense contractors may revert to this earlier habit of selling software and software-defined systems under monikers like data analytics, predictive data analytics, automated data analysis, and the like. (In some cases, these reversions will be a return to accuracy.) The mantra of this corrective period would be: less hype, less public, and more deliberate.

Second, national defense organizations may take up the mantle of basic AI R&D in place of commercial enterprises. Indeed, U.S. federal agencies, including the Advanced Research Projects Agency (ARPA), were historically responsible for catapulting a scattered collection of research efforts to replicate human cognitive abilities into a recognizable subfield of computer science called “Artificial Intelligence.” Even the first practical implementation of an artificial neural network – Frank Rosenblatt’s Mark I Perceptron – was the result of image recognition work funded by U.S. defense agencies in the pursuit of automatic target recognition capabilities.

An AI downturn today could see a reversion to this role of national defense spending, in effect picking up the pieces to capture the value left behind in the bubble’s wake.

Two curveballs, however, stand out against historical precedent.

First, the U.S. government is not the only game in town. More specifically, other governments possess the capital and/or the willingness to put their weight behind basic AI R&D where commercial enterprises fall short. China is the cliché candidate for American substitution. India, should it ramp up its basic R&D spending as a percentage of GDP, is another. Gulf states including the United Arab Emirates and Saudi Arabia with deep-pocketed sovereign wealth funds are others still.

Additionally, while the U.S. government has the necessary capital, a willingness to act in earnest in the event of a downturn may be in short supply. U.S. federal spending and support (e.g., foreign talent attraction and retention) for basic scientific research has been waning.

The caveat is that AI specifically does have the support of the federal government. This includes a proposed modest increase in AI research funding and an expansion of AI computing resources at the National Science Foundation, as well as Department of Defense (DoD) contracts with the major Large Language Model providers – OpenAI, Google, Anthropic, and xAI. The White House’s AI Action Plan released in July 2025 is likewise a full-throated endorsement of American AI leadership.

That said, AI has traditionally benefited from a diversity of research efforts that, in isolation, may have looked to be of little promise (as Haigh also details, it is during AI downturns that techniques are developed which lay the foundation for the next AI booms). It is not clear to what extent the U.S. government is willing to put its weight behind lesser-known techniques in AI as opposed to fleshing out the scope of mainstream techniques, in this way merely following the lead of commercial enterprises whose R&D incentives are misaligned with long-term defense planning.

What Should Defense Contractors Do?

The AI bubble bursting may lead to a smaller pool of available capital with which to engage in AI-related research & development for defense applications. In this scenario, contractors aiming to earn U.S. DoD or other military contracts for AI products and services should target outlier problems in state-of-the-art models, rather than rehashing existing approaches. The most impactful AI R&D in defense will be that which bridges the gap between what is currently possible and what needs to be possible.

This means that contractors should identify two sets of problems: (1) problems which state-of-the-art models have not meaningfully impacted; (2) problems where state-of-the-art models solve 90% of the task yet struggle with the remaining 10%. In both cases, AI R&D is tailored to identifiable shortcomings in need of technical or conceptual solutions. Partnerships between academia and more traditional industry contractors, however unfamiliar, should likewise be on the agenda given academia’s work at the peripheries of current systems’ capabilities.

A regulatory interpretation system under development through a DARPA contract is an illustrative example, particularly of (2). Consider the following problem: the automated extraction of regulations and policies in a form interpretable by humans who need the information but lack legal expertise. Consider a partial solution: a transformer-based Large Language Model, like OpenAI’s GPT-5, which can provide quick analyses of complex and dense texts in fluent natural language. Further consider, however, that such models are likely to generate incorrect answers and miss the nuances required for explaining regulatory rules. Thus, as the contractor is pursuing, something else is needed: a blend of logic-based computation with language models that preserves the fluent output of the language model while boosting its accuracy and fidelity.
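The hybrid approach described above can be sketched in miniature. The following is an illustrative, hypothetical example only – the rule base, the stubbed model call, and all function names are invented for this sketch and do not reflect the actual DARPA-contracted system. It shows the core idea: a language model drafts a fluent answer, and a deterministic, logic-based layer checks the draft against encoded rules, catching obligations the model omitted.

```python
# Hypothetical neuro-symbolic sketch: an LLM drafts prose; a symbolic rule
# check validates the draft before release. All rules and the model stub
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str
    condition: str      # attribute a scenario must have to trigger the rule
    requirement: str    # obligation imposed when the condition holds

# Hypothetical rule base, as if extracted from a regulation
RULES = [
    Rule("R1", "exports_controlled_item", "file_export_license"),
    Rule("R2", "handles_personal_data", "complete_privacy_review"),
]

def llm_draft(scenario: set) -> tuple:
    """Stub standing in for a language-model call: returns fluent prose plus
    the obligations the model claims apply. Here the stubbed model misses
    the privacy obligation, to show what the symbolic layer catches."""
    claimed = {"file_export_license"}
    return "You must file an export license before shipping.", claimed

def logic_check(scenario: set, claimed: set) -> list:
    """Symbolic layer: deterministically derive obligations from the rule
    base and report any the model's draft omitted."""
    required = {r.requirement for r in RULES if r.condition in scenario}
    return sorted(required - claimed)

scenario = {"exports_controlled_item", "handles_personal_data"}
draft, claimed = llm_draft(scenario)
missing = logic_check(scenario, claimed)
print(missing)  # the logic layer flags the omitted privacy obligation
```

The design point is that the symbolic layer is auditable and exhaustive over its rule base, while the language model supplies the fluent explanation; neither component alone meets the fidelity bar the defense application demands.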

Relatedly, contractors should devote additional attention to the distinction between automation and autonomy. As DARPA Program Manager Alvaro Velasquez noted in 2023, tasks like object detection or waypoint navigation may use technologies within “AI,” but these are ultimately examples of automation. A system in possession of autonomy is different: this system can handle uncertainty in its local environment through real-time adaptation while retaining a robust level of performance. Such autonomy R&D is present in the human-machine symbiosis efforts at DARPA where contextual adaptation is a longstanding goal.

An AI downturn would heighten the focus on these aspects of AI in defense. Contractors focused on them have an advantage in the event of a bubble bursting.

Vincent Carchidi

Vincent Carchidi has a background in defense and policy analysis, specializing in critical and emerging technologies. He is currently a Defense Industry Analyst with Forecast International. He also maintains a background in cognitive science, with an interest in artificial intelligence.
