Where Is Artificial Intelligence in Defense Heading?


(This is the second of a two-part series on AI in defense. You can find the first part here.)

Introduction

A white paper recently published by the Computing Community Consortium on Artificial Intelligence (AI) included the following conspicuous passage:

Each paradigm was hailed as ushering in a new age of AI, each produced a series of transformative applications, and each was eventually superseded by one or more new paradigms that built on those previous insights. This suggests an obvious question: What’s next for AI research? That is, what comes after the current age of deep neural networks and foundation models?

There is an expectation that, in AI, a new day will come.

This expectation has roots in a field whose history is marked by periods of euphoria, when the success of a new technique is believed to be the key to human-like intelligence, only for that hope to be cut off at the knees by the technique’s failure to achieve it. As the Institute for Defense Analyses’ Robert Richbourg wrote in 2018:

However, history shows that overestimated potential has led to the frustration of unmet expectations and investment with little outcome…The time to pay attention to this warning is now…limitations of these technologies…need to become more widely understood by those who seek to apply “AI” to solve problems of national security.

In Part 1, we reviewed some of these limitations that are persistent across the two dominant approaches to AI: Symbolic AI (more prominent in the twentieth century) and Machine Learning (inclusive of the deep neural networks that are dominant today).

The (inexhaustive) list includes:

  • Unreliability
  • Failure to robustly adapt to novelty
  • Lack of explainability
  • Compute-, data-, and energy-intensiveness

It is the inability of AI-enabled machines to match the robust flexibility of human beings in novel situations with limited resources that historically leads to a rethinking of their trajectory. Such was the eventual result of the previously discussed Strategic Computing Initiative sponsored by the Defense Advanced Research Projects Agency (DARPA) in the 1980s.

Yet, these shortcomings are not identified in a vacuum, detached from the uses to which a machine is put. They are identified on the basis of the machine’s applications.

The question, “Where is AI heading?” therefore depends on another question: “What are the applications for which current AI systems fail and why do they fail?”

AI Research Is Driven by Organizational Needs

Against this backdrop, it is little surprise that DARPA marked its aspirational “Third Wave” in AI as one consisting of Contextual Adaptation. Models that meet this standard can robustly deal with situations that bear little resemblance to what they have previously encountered.

In this way, such Third Wave AI systems will better serve the end of collaboration with human warfighters, augmenting human ability rather than replacing it. This is an organizational need. Technical research serves this end. The upshot is that, wherever AI research heads from here, it will do so in reference to such organizational needs.

Although organizations differ in their needs and priorities, any attempt to construct systems suitable for sensitive or otherwise mission-critical domains must allocate resources to research targeting deficiencies related to robust adaptability with limited resources.

Possible Directions for AI

Taking a page out of the Computing Community Consortium’s white paper, we can identify several possible paths for AI.

Neuro-Symbolic AI: The most popularly cited example seeks a deep integration of algorithms and architectures from the neural and symbolic traditions. The goal is to blend the flexible learning and pattern-matching abilities of neural networks with the precision, reliability, and compute-efficiency of symbolic systems. Such systems are typically conceived as specialized rather than general, though within those specialized domains researchers aim for a level of sophistication and reliability not found in today’s general-purpose systems.
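To make the division of labor concrete, here is a minimal, hypothetical sketch in Python: a stand-in “neural” component maps raw input to soft predicates with confidence scores, and a hand-written symbolic rule layer draws conclusions from them. The predicates, rules, and thresholds are all invented for illustration; a real system would use a trained network and a far richer knowledge base.

```python
import numpy as np

# --- Neural component: a toy perception model (stand-in for a trained network) ---
def neural_perception(image: np.ndarray) -> dict:
    """Map raw input to soft predicates with confidence scores.
    A real system would use a trained deep network; here we fake the scores."""
    rng = np.random.default_rng(seed=int(image.sum()) % 2**32)
    return {
        "has_wings": float(rng.uniform(0.8, 1.0)),
        "has_rotor": float(rng.uniform(0.0, 0.2)),
        "fixed_wing_profile": float(rng.uniform(0.7, 1.0)),
    }

# --- Symbolic component: hand-written rules over the predicates ---
RULES = [
    # (conclusion, required predicates, confidence threshold)
    ("fixed_wing_aircraft", ["has_wings", "fixed_wing_profile"], 0.6),
    ("rotary_wing_aircraft", ["has_rotor"], 0.6),
]

def symbolic_inference(predicates: dict) -> list:
    """Fire every rule whose required predicates all clear the threshold."""
    conclusions = []
    for conclusion, required, threshold in RULES:
        if all(predicates.get(p, 0.0) >= threshold for p in required):
            conclusions.append(conclusion)
    return conclusions

if __name__ == "__main__":
    fake_image = np.ones((8, 8))           # stand-in for sensor input
    preds = neural_perception(fake_image)  # neural: pixels -> soft predicates
    labels = symbolic_inference(preds)     # symbolic: predicates -> conclusions
    print(preds, "->", labels)
```

The appeal of the arrangement is that the symbolic layer is inspectable: one can read exactly which rule produced a conclusion, something a raw network does not offer.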

Forecast International has recently covered the Neuro-Symbolic research being done at U.S. defense agencies like DARPA. The combination of formal methods (logic, rules-based systems) with neural networks has also caught the attention of some in the U.S. Army.

Neuromorphic AI: This approach targets the hardware that underpins AI models, seeking to build chips whose structures emulate the neural tissue of humans and animals. It borrows the focus on emulating brains via artificial neural networks and applies it to physical computing infrastructure.
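The basic computational unit such chips implement in silicon is the spiking neuron. The toy simulation below, a leaky integrate-and-fire neuron in plain Python, illustrates the event-driven style of computation neuromorphic hardware targets; it is an illustration of the principle, not any vendor’s API, and all parameter values are invented.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward rest, integrates input current, and emits a discrete spike on
# crossing threshold. Software simulation for illustration only.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Integrate input current; emit a spike (1) when the membrane potential
    crosses threshold, then reset. Returns the spike train as 0s and 1s."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt * (-(v - v_rest) / tau + i_t)
        if v >= v_thresh:
            spikes.append(1)
            v = v_rest  # reset after spiking
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    current = [0.15] * 50  # constant drive, enough to cross threshold
    train = simulate_lif(current)
    print("spike times:", [t for t, s in enumerate(train) if s])
```

Because such neurons only communicate when they spike, hardware built around them can, in principle, be far more energy-efficient than conventional processors running dense matrix arithmetic.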

Embodied AI: Existing AI systems are digital: software applications used by humans, lacking a physical, controllable presence of their own. Some researchers characterize this lack of physical presence as a fundamental limitation, one that cuts AI systems off from certain aspects of learning and understanding. On this view, only through direct interaction with the physical world, and in particular by manipulating physical objects and observing the effects of those actions, will AI systems attain a more human-like intelligence.
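A toy sketch of that claim, under entirely invented physics: an agent repeatedly pushes a simulated block, observes the resulting acceleration, and refines its estimate of the block’s mass from its own actions rather than from a static dataset.

```python
# A minimal embodied learning loop: act on a (simulated) object, observe the
# physical effect, update an internal model from the interaction. Toy physics.

TRUE_MASS = 2.0  # hidden property of the object; the agent must infer it

def environment(force: float) -> float:
    """The world: applying a force to the block yields an acceleration (a = F/m)."""
    return force / TRUE_MASS

def run_agent(trials: int = 5) -> float:
    mass_estimate = 1.0  # agent's initial (wrong) belief
    for t in range(trials):
        force = 1.0 + t                # act: push with a chosen force
        accel = environment(force)     # observe: the resulting acceleration
        observed_mass = force / accel  # infer: m = F / a from this interaction
        mass_estimate += 0.5 * (observed_mass - mass_estimate)  # update belief
        print(f"trial {t}: mass estimate = {mass_estimate:.3f}")
    return mass_estimate

if __name__ == "__main__":
    run_agent()
```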

Multi-Agent AI: Much of the focus in AI today is on individual models, with the commercial trend revolving, until recently, around a single model capable of performing more and more tasks. Some, however, see promise in constructing a collaborative ecosystem of specialized AI agents, each bringing distinctive capabilities to the table while coordinating with one another to achieve complex, collective goals.

Note that these agents’ technical underpinnings are not pre-defined; they may be the “agents” discussed today or some future technology yet to be designed.
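In schematic form, and with every name invented for illustration, such an ecosystem might look like an orchestrator routing subtasks to whichever specialized agent registers the needed capability:

```python
# A minimal multi-agent arrangement: specialized agents, each with a narrow
# capability, coordinated toward a shared goal. All names and "skills" here
# are hypothetical stand-ins; real agents could be LLM-backed or symbolic.

class Agent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # a callable this agent contributes

    def handle(self, payload):
        return self.skill(payload)

def orchestrate(task, agents):
    """Route each subtask to the agent registered for its needed capability
    and merge the results into one answer."""
    results = {}
    for subtask in task["subtasks"]:
        agent = agents[subtask["needs"]]
        results[subtask["name"]] = agent.handle(subtask["payload"])
    return results

if __name__ == "__main__":
    agents = {
        "translate": Agent("linguist", lambda s: s.upper()),       # stand-in skill
        "summarize": Agent("analyst", lambda s: s.split(".")[0]),  # stand-in skill
    }
    task = {"subtasks": [
        {"name": "t1", "needs": "translate", "payload": "intercepted message."},
        {"name": "t2", "needs": "summarize", "payload": "long report. details."},
    ]}
    print(orchestrate(task, agents))
```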

A Personal View, Defense and All

Which approach to AI is the most promising, and which will be most relevant to defense? Given the uncertainty of progress in a field marked by periods of disappointment, the most that can be offered is personal, though grounded, speculation.

My tentative vote goes to Neuro-Symbolic AI as the leading contender, even though it has earned a kind of cliché status in popular coverage as the field’s next fad. It gets “clicks.”

However, Neuro-Symbolic AI is premised on a technical agenda of significant relevance to defense organizations: it is directly concerned with alleviating the most pernicious obstacles in the deployment of AI models in mission-critical situations. These principally concern accuracy and reliability, the ability to generalize to novel situations robustly with limited resources, explainability (human-interpretability), and compute-efficiency. Some academics have indeed singled out Neuro-Symbolic AI as the most promising manifestation of the field’s Third Wave.

Moreover, it serves defense-relevant ends in two ways. First, this research program does not rely on attaining “artificial general intelligence” or something comparably hoity-toity. It targets specific deficiencies that hamper identifiable applications.

Second, it leverages two existing paradigms, each of which has already laid claim to distinctive capabilities: narrow performance guarantees, explainability, and compute-efficiency in the case of Symbolic AI; flexibility of application, scalability, and natural language processing in the case of Neural AI (machine learning).

Major efforts like the Golden Dome missile defense initiative, which my colleague Carter Palmer and I covered recently, are a potential case-in-point. There, should AI systems play a role in the interception of missiles during their (time-crunched) boost phases, they must possess – upon deployment – an ability to interface with other components of the Golden Dome architecture, provide accurate identification of a threat within three minutes, and be supported by sufficiently advanced hardware for rapid data processing, among other tasks.

One could imagine a combination of different techniques ultimately underpinning deployments as sensitive as this. Neuro-Symbolic AI may afford the model the capability to generate a recommendation, say, about whether a given heat signature most likely represents a missile launch within the time constraints and with the precision this requires.
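To be explicit that this is hypothetical: the sketch below shows one way such a recommendation pipeline could be structured, with a fast neural scorer proposing a classification, a symbolic layer checking it against hard physical constraints, and the whole pass bounded by a time budget. Every feature, threshold, and rule is invented and bears no relation to any real missile-warning system.

```python
import time

# Hypothetical sketch only: neural score proposes, symbolic rules confirm or
# veto, all inside a fixed time budget. All values are invented.

def neural_score(signature: dict) -> float:
    """Stand-in for a trained network's launch probability."""
    return min(1.0, signature["intensity"] / 100.0)

def symbolic_check(signature: dict) -> bool:
    """Hard constraints a genuine boost-phase plume would have to satisfy
    (invented for illustration)."""
    return (signature["rise_rate"] > 5.0       # plume intensity climbing fast
            and signature["duration_s"] < 300)  # boost phase is short

def classify(signature: dict, budget_s: float = 0.5) -> str:
    start = time.monotonic()
    score = neural_score(signature)                # fast neural pass
    if time.monotonic() - start > budget_s:        # enforce the time budget
        return "TIMEOUT: escalate to human"
    if score > 0.8 and symbolic_check(signature):  # symbolic confirm/veto
        return "LIKELY LAUNCH"
    return "NO LAUNCH INDICATED"

if __name__ == "__main__":
    print(classify({"intensity": 92.0, "rise_rate": 8.1, "duration_s": 120}))
```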

One (hypothetical, I stress) application of Neuro-Symbolic AI here would be to target the training data bottleneck – a likely lack of data on missile launches of various types, particularly during their boost phase, on which to train a neural network. In a process known as the “neurosymbolic cycle,” a model is trained on less data than is typical for neural networks; a symbolic component then extracts rules from that limited data, consolidates this knowledge, and sends it back to the neural network for further training. The rules guide the learning, reducing the need for data quantity.
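Rendered as a schematic loop, with every function body a placeholder rather than a real training or rule-extraction routine, the cycle looks roughly like this:

```python
# Schematic of the "neurosymbolic cycle" described above: train briefly,
# extract symbolic rules from what the network learned, consolidate them,
# then feed the rules back to guide further training. Function names are
# illustrative placeholders, not any library's API.

def train_network(model, data, rules=None):
    """One training pass; if rules are supplied, use them to constrain
    learning (e.g., as soft constraints on the loss). Placeholder body."""
    model["passes"] += 1
    if rules:
        model["constraints"] = rules
    return model

def extract_rules(model, data):
    """Distill the network's behavior into symbolic rules (placeholder)."""
    return [f"rule_from_pass_{model['passes']}"]

def consolidate(rule_base, new_rules):
    """Merge new rules into the knowledge base, dropping duplicates."""
    return sorted(set(rule_base) | set(new_rules))

def neurosymbolic_cycle(data, cycles=3):
    model, rule_base = {"passes": 0}, []
    for _ in range(cycles):
        model = train_network(model, data, rules=rule_base)  # rules guide learning
        new_rules = extract_rules(model, data)               # symbolic extraction
        rule_base = consolidate(rule_base, new_rules)        # consolidation
    return model, rule_base

if __name__ == "__main__":
    print(neurosymbolic_cycle(data=["scarce", "launch", "examples"]))
```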

To be sure, the possible approaches listed above are somewhat compatible with one another, and the choice is not zero-sum.

Building, for example, autonomous aircraft – exemplified by the Collaborative Combat Aircraft program – could see a blend of Neuro-Symbolic AI and Embodied AI. Such an aircraft’s flight could be governed by software that integrates neural and symbolic techniques in the service of controlling a physical object – the aircraft itself. The software would essentially have to learn (potentially with certain hard-coded rules guiding this learning) how to properly manipulate the aircraft’s physical presence in the course of serving some human-defined end (e.g., conducting aerial reconnaissance). So-called military “robot dogs” are a similar example.
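One way to picture that blend, with all limits and values invented for illustration: a learned flight policy proposes control actions, and hard-coded symbolic rules override any proposal that would violate hard flight constraints.

```python
# Hedged sketch of the blend described above: a learned policy proposes
# control actions; hard-coded symbolic rules (the guardrails) clamp or veto
# anything that breaks flight constraints. All values are invented.

MAX_BANK_DEG = 60.0  # hypothetical structural bank limit
MIN_ALT_FT = 500.0   # hypothetical altitude floor

def learned_policy(state: dict) -> dict:
    """Stand-in for a trained control network's proposed action."""
    return {"bank_deg": state["target_bank"], "climb_fpm": state["target_climb"]}

def apply_rules(state: dict, action: dict) -> dict:
    """Symbolic layer: enforce hard constraints on the proposed action."""
    safe = dict(action)
    # Rule 1: never exceed the structural bank limit.
    safe["bank_deg"] = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, safe["bank_deg"]))
    # Rule 2: below the altitude floor, force a climb.
    if state["alt_ft"] < MIN_ALT_FT:
        safe["climb_fpm"] = max(safe["climb_fpm"], 1000.0)
    return safe

if __name__ == "__main__":
    state = {"alt_ft": 420.0, "target_bank": 75.0, "target_climb": -200.0}
    proposed = learned_policy(state)           # neural proposal
    commanded = apply_rules(state, proposed)   # symbolic override
    print(proposed, "->", commanded)
```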

In any event, each of these possible approaches to AI likely has years of work left to firm up its foundations. Their integration with one another, where possible, is neither guaranteed nor straightforward. The future of AI in defense will therefore require a consistently targeted and long-term focus, never forgetting that no single technique is ever likely to be universally applicable.

Vincent Carchidi

Vincent Carchidi has a background in defense and policy analysis, specializing in critical and emerging technologies. He is currently a Defense Industry Analyst with Forecast International. He also maintains a background in cognitive science, with an interest in artificial intelligence.
