Is Dual-Use Tech Entering a High(er) Risk Period? Outlook for the Year Ahead

It is not uncommon today to find warnings about the looming impacts of an Artificial Intelligence (AI) industry downturn. A recent feature in Project Syndicate, for example, brought together scholars to present their views on the possibility of a financial crisis in 2026. The AI sector figured in these visions of potential crisis, though how central its role was envisioned to be depends on the author in question, with other factors – like the health of the international trade system – also at play.

In a previous article, I laid out plausible outcomes for the defense industry if there is an about-to-burst AI bubble, focusing on the restricted pool of available capital that would likely result, and the steps defense contractors should consider taking in this scenario.

As we enter 2026, risks originating within – and adjacent to – the AI industry are top of mind. Indeed, a “vibe shift” of sorts does appear to be underway beyond the economic risks, with outlets like the MIT Technology Review publishing a collection of articles this month under the theme of “hype correction.”

If there is a downturn stemming from this high-technology sector in the next year, it is likely because of the roundabout developmental path of its most famous variants – Large Language Models (LLMs), themselves built upon only a subset of the subfield of deep learning. Unlike a number of modern technologies whose maturity is now taken for granted, this form of AI was developed primarily outside of the U.S. Department of Defense, with an unusually abrupt transition to commercialization.

Yet, a downturn of this kind may merely be the most visible of the risks facing American high-technology industries. This is not to downplay its potential impact. Rather, the risks for the year ahead are coinciding with seeds of institutional and attitudinal shifts both within and beyond the U.S. federal government. These shifts principally concern the necessity and time-sensitivity of sustained research & development for technological maturation.

Parallel reasoning about the affordability and scalability of high-tech-enabled platforms can be found in support of products and services in both the commercial and defense domains, and in both cases it shapes perceptions of the usefulness of traditional bureaucratic processes.

In sum: any potential AI industry downturn will coincide with both these institutional and attitudinal shifts and broader economic constraints, domestic or international. These coinciding risks at least merit the attention of forecasters and other analysts.

Disordered Development

LLM vendor OpenAI was among the recipients of $200M (ceiling) contracts awarded by the U.S. Chief Digital and Artificial Intelligence Office (CDAO) this year. Earlier, in December 2024, OpenAI partnered with defense firm Anduril to build AI models that can quickly synthesize time-sensitive data, improve situational awareness, and otherwise reduce the cognitive loads of human operators. Earlier still, in April 2024, Microsoft representatives – reportedly – spoke with U.S. Defense Department officials about, in part, the possibility of training battle management systems on AI-generated imagery produced by OpenAI's DALL-E image generators.

Despite these defense engagements, however, OpenAI is first and foremost a provider of commercial applications. The maximum value of the contract awarded to OpenAI by the CDAO indeed represents quite a small portion of the company’s annual revenue. OpenAI became a household name instead because of its release of the web-based consumer application ChatGPT, powered by GPT-3.5, in November 2022, followed by successive generations of models, including GPT-4, GPT-4.5, GPT-5, and the company’s “o-series” models (AKA, the “reasoning” models).

Prior to this release, the underlying Transformer architecture had been developed by Google researchers in 2017. Only later was this architecture put to real use by OpenAI, which built its “Generative Pre-trained Transformer,” or GPT, models upon it, beginning with GPT-1 in 2018. (Worth noting is that OpenAI staffers did not appear to expect ChatGPT’s release in November 2022 to be a particularly significant moment.) The rest is history.

Generative AI – the term loosely used to refer to models that generate new data of a kind similar to what they were trained on – notably caught the attention of defense organizations significantly more after ChatGPT’s release than before (which is not to say AI was not paid its dues in other forms). Its release inaugurated a general enthusiasm for applications built via this underlying technology, up to and including the recently announced GenAI.mil hosted by the U.S. Department of Defense – a platform that can be queried in natural language for U.S. defense personnel, underpinned by Google’s Gemini for Government.

Intelligence, Like Drones, As Cheap and Scalable

Interestingly, this recent period has seen the employment of parallel reasoning in service of dual-use commercial and defense technologies, particularly operating either in or around Generative AI.

In June of this year, OpenAI CEO Sam Altman made a prediction: “Intelligence too cheap to meter is well within grasp.” The sentiment behind this prediction: that the keys to reproducing the intellectual abilities once thought exclusive to humans are within reach, and the price of doing so is dropping.

So says this view: Soon, intelligence will be a cheap, highly scalable commodity. The laborious processes that once defined human economic activity will be a relic of the past, made obsolete by intelligent machinery.

Altman’s sentiment mirrors current debates about defense procurement and emerging technologies in the United States. Specifically, the mass wins side of the mass vs. quality debate employs strikingly similar reasoning.

On this view, it is not the “exquisite” platforms that will win out in a near-peer or peer-to-peer conflict; it is mass. Sheer volume of scalable, cheap-to-build platforms – like uncrewed drones – is the key to future victory.

This reasoning is pervasive today across the defense technology world. Anduril Industries President and Chief Strategy Officer Christian Brose told The Wall Street Journal in July that the American military needs capabilities that are “mass-producible, that is adaptable, that is scalable and that is fundamentally replaceable when, God forbid, the war doesn’t end in the first hours or the first month…” The ever-increasing expenses of next-generation fighter jets, he says, illustrate the pitfalls of pursuing exquisite platforms over a spectrum of more and less disposable platforms.

So says this view: Soon, force structures will be defined by cheap, highly scalable platforms. The laborious processes that once defined defense procurement will be a relic of the past, made obsolete by intelligent, mass-producible systems.

Going Forward, Then Backward

The sequence of events that led this parallel reasoning to such prominence in the U.S. commercial and defense high-tech industries is, to be sure, historically unusual.

Defense analyst Mary Cummings, for her part, wrote an intriguing piece in September arguing for a prohibition on the use of Generative AI in any form of weapon control. While this view is debatable, she makes an important argument alongside it: the rapid commercialization of Generative AI models effectively cut short a process that would otherwise have allowed researchers to better understand how and why these models function as they do in specific circumstances and better come to grips with their limitations in real-world domains before widespread deployment.

Generative AI, in this sense, failed to proceed through the mid-rungs of the U.S. Defense Department’s Technology Readiness Levels, during which technological maturation occurs – often, for a period of years.

The implication is that Generative AI technologies in particular – a genuinely impactful set of technologies – went forward into deployment before they were prepared. As such, a step backwards – towards technological maturity – may be imperative before they can continue going forward into sensitive commercial and defense domains.

Institutional and Attitudinal Shifts

The risks currently associated with the AI industry going into the new year thus take on a new light against this backdrop. The short-circuiting of Generative AI’s technological maturation is in part responsible for the mismatch between its perceived and actual real-world impact.

That said, an AI bubble and its effects on the U.S. financial system are merely the most visible risk, and potentially the less interesting one in the long term. These developments in the AI industry coincide with potential shifts in the U.S. federal government’s stance on research & development and defense acquisition. These shifts, in turn, would coincide with any forthcoming economic contractions.

Certain events are notable in this context. Consider how, during the U.S. federal government shutdown this year, the Trump administration covered troop pay with a diversion of unspent Defense Department funds. Specifically, $2.8 billion in unspent shipbuilding and department-wide research & development funds were diverted to cover the expense.

The final outcome of this diversion is not my immediate concern here, and U.S. budgetary matters in this regard are better assessed by my colleagues Shaun McDougall and Richard Pettibone.

What is notable is that, when push came to shove, the activities considered expendable were in fact those dedicated to shipbuilding (itself a longstanding concern among American defense analysts) and research & development. While no dramatic implications are to be drawn from this, it potentially augurs a shifting sentiment about activities like military R&D and, in particular, the necessity or time-sensitivity of such activities.

This data point is not alone. The U.S. Defense Department released a new Acquisition Transformation Strategy in November. This new strategy replaces Program Executive Officers (PEOs) with Portfolio Acquisition Executives who would have greater authority to intervene on programs within their portfolios without waiting for lengthy bureaucratic approval processes, along with longer tenures than PEOs.

One particularly interesting aspect of the Strategy, highlighted by Secretary of Defense Hegseth, is that Portfolio Acquisition Executives could abruptly shift funding from a “faltering” program to “accelerate or scale a higher priority” within their portfolio.

This new authority has the benefit of avoiding the kinds of entanglement that historically strangled ambitious military R&D programs. As my colleague Anna Miskelley and I wrote in October, the U.S. Defense Advanced Research Projects Agency’s 1983-1993 Strategic Computing Initiative – a major initiative to build out the future of computing in defense – possessed such an entangled structure. Individual programs were planned not only according to their own progress but also according to the interdependent progress of other programs. A lag in one program caused lags in others. This structure contributed in part to Strategic Computing’s downfall.

The authority of Portfolio Acquisition Executives to judge that a program is failing, and is not worth accumulating further sunk costs, could be a means of avoiding similar issues in U.S. military R&D – and wisely so.

Yet, these Executives will not operate in a vacuum. Defense technology startups have – rightly – captured the attention of Defense Department leadership. Momentum is on the side of “mass wins.” Platforms and equipment – like intelligence in the commercial domain – are increasingly understood as best designed for scalability and, often, disposability. Individual decisions in the allocation of funds, should the Acquisition Strategy be implemented successfully, potentially run downstream from the sentiments most clearly expressed in the commercial domain by figures like Altman.

Outlook: What Is Familiar Is Not Guaranteed

Whether there is an AI bubble, and whether it bursts in 2026, is an important question for the defense industry. Such a burst would affect defense contracting, especially through the pool of capital available to the industry and the flexibility of spending.

Yet, this risk is merely the most visible. Any outlook for the year ahead should consider that institutional and attitudinal shifts, principally in the United States, are coinciding with potential economic turmoil. Emerging perspectives on research & development – on its necessity and time-sensitivity – would collide with exogenous economic and political events, domestic or international, that threaten to constrain U.S. capabilities and reach, technologically or otherwise.

To be sure, it could turn out that 2026 will not see these risks manifest, either in whole or in part. It is possible that current talk of an AI bubble is overblown, and that a normal period of market correction substitutes for what was previously feared to be a system-wide fallout. Similarly, it is possible that the eventual passage of the U.S.’s annual defense authorization bill signals continuity in federal government spending and political priorities rather than departure, including in areas mentioned above like shipbuilding.

Still, complex institutional and attitudinal patterns have a way of looking permanent until they are not. The unusual developmental path that Generative AI and its adjacent technologies have taken – and the noticeable parallels between the commercial and defense reasoning employed in their support – at least merit the attention of forecasters. None of these trends occur in isolation, and the flaring up of one will likely collide with others. There is wisdom in preparing for the worst, and the unfamiliar.

Vincent Carchidi

Vincent Carchidi has a background in defense and policy analysis, specializing in critical and emerging technologies. He is currently a Defense Industry Analyst with Forecast International. He also maintains a background in cognitive science, with an interest in artificial intelligence.
