
Not long ago, it was fashionable to deride Artificial Intelligence (AI) as merely another example of Natural Stupidity. Recent technical advancements have made the sentiment unpalatable. No one wants to be seen as a laggard.
AI is now a staple of defense innovation: In June, the U.S. Chief Digital and Artificial Intelligence Office (CDAO) awarded OpenAI a fixed-price contract with a $200 million ceiling for the development of “prototype frontier AI capabilities” in both “warfighting and enterprise domains,” followed by equivalent contracts for Anthropic, Google, and xAI in July. RTX conveys a spirit of modernization in its partnership with Shield AI, which aims to integrate the latter’s Visual Detection and Ranging software with RTX’s Multi-Spectral Targeting System family of surveillance, tracking, and laser designation systems. France-based Thales makes a point of noting upgrades to its TALIOS laser designation pod through the use of “advanced AI algorithms” housed in the onboard “Thales Neural Processor.”
The message they all send: Artificial Intelligence is living up to its name.
But is this the full story?
The DARPA Perspective
The answer depends on having the right framework in mind: What is AI in defense for?
Fortunately, the U.S. Defense Advanced Research Projects Agency (DARPA) provides one. The Agency has historically viewed AI in terms of human-machine symbiosis: a technology whose maturation means these systems “will function more as colleagues than as tools.” This theme traces back to DARPA Information Processing Techniques Office head J.C.R. Licklider, who wrote a manifesto on “man-computer symbiosis” in 1960.
The perspective was crystallized by former DARPA Information Innovation Office Director John Launchbury in 2017, roughly five years into the deep learning revolution. He divided AI development into three waves: beginning with the First Wave in the twentieth century, each successive wave deepens the symbiosis between humans and machines, with the machine an ever more capable partner in the conduct of war.
The approach situates the field today in the Second Wave, dominated by models that learn statistical associations from increasingly massive datasets, building a ‘model’ of correlations between data points that the system then uses to generate outputs. Large Language Models built on the transformer architecture fit this basic description.
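To make the Second Wave description concrete, consider the toy sketch below. It is illustrative only; the corpus, names, and bigram approach are simplifications chosen for brevity, not how transformer-based Large Language Models are actually built. The model does nothing but tally statistical associations in its training data and then generates output by sampling from those learned correlations.

```python
# Minimal sketch of a "Second Wave" statistical learner: a bigram model.
# It builds a table of co-occurrence counts (statistical associations) from a
# toy corpus and then generates output by sampling from those correlations.
# The corpus and variable names are illustrative placeholders.
import random
from collections import defaultdict

corpus = "the system learns statistical associations from data and generates outputs".split()

# "Training": count how often each word follows another.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> list[str]:
    """Generate text by repeatedly sampling the next word in proportion
    to how often it followed the current word in the training data."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no learned association beyond this word
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return out

print(" ".join(generate("the")))
```

Transformer-based systems replace the count table with billions of learned parameters, but the Second Wave logic is the same in kind: correlations extracted from data drive the outputs the system produces.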
Launchbury’s framework envisions a Third Wave that is yet to fully manifest. The goal is an adaptive form of autonomy, perhaps bound to a given mission but capable of reasoning through unseen problems in the course of executing it.
DARPA committed $2 billion in 2018 to the (now-completed) AI Next Campaign, a project explicitly devoted, in part, to “pioneering the next generation of AI algorithms and applications, such as ones featuring…common-sense reasoning.” The program reaffirmed DARPA’s vision of AI as an intelligent partner.
Integrating the First Two Waves of AI
DARPA’s Project MSL-05 / AI and Human-Machine Symbiosis carries out the basic and applied research relevant to this longstanding vision. One program in MSL-05 of particular interest took root in FY23: the Assured Neuro Symbolic Learning and Reasoning (ANSR) program.
ANSR Program Funding ($ in millions)
FY23  | FY24   | FY25   | FY26  |
9.620 | 14.000 | 19.000 | 4.000 |
Source: Department of Defense FY26 Budget Estimates, Defense Advanced Research Projects Agency, Justification Book Volume 1 of 5, RDT&E, Defense-Wide, June 2025
Note: The decrease from FY25 to FY26 reflects the program’s realignment into MSL-05 and, likely, its near-completion.
ANSR advances a hypothesis: the deep integration of architectures and algorithms developed in the First Wave (Symbolic) and Second Wave (Neural) will yield systems that overcome persistent and fundamental limitations of today’s state-of-the-art AI. This integration – called Neuro-Symbolic AI – will lead to systems capable of generating robust outputs from a sufficiently adaptable process, generalizing to new situations far beyond their original training data, and providing evidence for assurance and trust during operation in mission-critical situations.
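A minimal sketch can illustrate the pattern ANSR’s hypothesis points toward, assuming a deliberately simplified setup: a neural-style component supplies scored hypotheses (here, hard-coded stand-in scores), while a symbolic layer applies explicit, human-readable rules and records which candidates it rejected and why. The labels, rules, and context values below are hypothetical illustrations, not ANSR’s actual architecture or algorithms.

```python
# Illustrative neuro-symbolic pattern: a learned (neural-style) component
# proposes scored hypotheses, and a symbolic rule layer filters them against
# explicit constraints, yielding both an output and an auditable trace.
# All scores, labels, and rules are hypothetical stand-ins.

# Stand-in for a neural perception model's output: label -> confidence score.
neural_scores = {"armored_vehicle": 0.55, "civilian_vehicle": 0.40, "decoy": 0.05}

# Symbolic knowledge: each rule is a (description, predicate) pair evaluated
# over a candidate label and the current context.
context = {"thermal_signature": "cold", "observed_speed_kph": 20}
rules = [
    ("armored vehicles present a hot thermal signature",
     lambda label, ctx: not (label == "armored_vehicle" and ctx["thermal_signature"] == "cold")),
    ("decoys do not move",
     lambda label, ctx: not (label == "decoy" and ctx["observed_speed_kph"] > 0)),
]

def classify(scores, ctx):
    """Return the highest-scoring label consistent with all symbolic rules,
    plus a trace of which rules eliminated which candidates."""
    trace = []
    for label, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        violated = [desc for desc, ok in rules if not ok(label, ctx)]
        if violated:
            trace.append((label, score, violated))
            continue
        return label, trace
    return None, trace

label, trace = classify(neural_scores, context)
print("selected:", label)
for rejected, score, reasons in trace:
    print(f"rejected {rejected} ({score:.2f}):", "; ".join(reasons))
```

Even in this toy, the appeal of the hybrid is visible: the statistical component ranks hypotheses, the symbolic component constrains them, and the rejection trace is the kind of evidence an operator could inspect for assurance and trust.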
ANSR’s hypothesis effectively answers the question: Which techniques will manifest the capabilities of an intelligent machine partner?
ANSR’s existence is also telling: Were the future of AI as an intelligent partner in defense secured, it—and other programs like it—would be unnecessary.
Smart Money and the Future
AI RDT&E is not pre-determined. No techniques for the Third Wave are guaranteed to manifest, nor are candidates like Neuro-Symbolic AI without their own shortcomings and limitations.
That said, the commercial orientation towards Neural systems today (popularized by transformer-based Large Language Models) gives the misleading impression that ‘smart money’ is on the bundle of techniques this paradigm represents. For a field whose identity revolves around the rapid pace of change, it is odd to find such certainty that the trajectory of Neural systems is limitless.
DARPA is indicating, through programs like ANSR, that this certainty is unfounded.
Each of the companies involved in the CDAO’s recent spate of major AI firm contracts specializes in what may be seen as a form of AI that the DoD has not (yet) fully exploited. As the DoD races to scaffold its operations with these Second Wave models, the RDT&E work carried out within both ANSR and DARPA’s broader MSL-05 has continued and expanded in parallel with the increase in commercial R&D on Second Wave systems.
The work of MSL-05 thus fills a void that commercial enterprises – owing to a different set of priorities – have not addressed to a satisfactory degree. The ANSR June 2022 Broad Agency Announcement is explicit on this point, and little-noticed scholarship published in public forums reinforces it. The commercial opportunity is nevertheless there for the taking.
Third Wave AI is the vision of a truly collaborative defense technology, brought closer by the maturing techniques of paradigms like Neuro-Symbolic AI. Smart money bets that, sooner or later, those techniques will come to fruition. It is merely a question of when, and by whom.
Vincent Carchidi has a background in defense and policy analysis, specializing in critical and emerging technologies. He is currently a Defense Industry Analyst with Forecast International.
Before joining Forecast International, Vincent was a Non-Resident Scholar and Affiliate of the Middle East Institute’s Strategic Technologies and Cyber Security Program, where his work focused on U.S.-China technology competition and the role of the Middle East therein. He has also served as a Non-Resident Fellow with the Orion Policy Institute’s Cyber Security & Information Technologies program.
Vincent has published diverse research, including a co-authored public response to the U.S. Office of Science and Technology Policy’s Request for Information on an AI Action Plan. His published research also touches on areas including Neuro-Symbolic AI and its role in U.S. defense research and development. Some of Vincent’s other work appears in Defense One, War on the Rocks, and Military Strategy Magazine. He maintains an academic background in cognitive science and AI that informs his work.