
Introduction
Artificial Intelligence (AI) benefits from being a general-purpose technology, applicable in principle to nearly any task or problem a human can specify. It is a versatile technology with a broad scope, one that defense organizations continuously exploit.
The unfortunate corollary is that, whatever its benefits, AI’s general applicability means its shortcomings are shared just as generally across those same tasks and problems.
The devil is always in the details. For some applications, seemingly minor techniques can make the difference between suitable and unsuitable. For others, more fundamental innovations are needed.
The Army’s Cyber Collaborative Research Alliance
U.S. Army cybersecurity is one such bundle of applications requiring fundamental research. Indeed, the service continues to devote RDT&E funding to the theoretical foundations of cyber science in the interest of Army network protection through its Cyber Collaborative Research Alliance (CCRA) program.
CB5: Cyber Collaborative Research Alliance ($ millions)
FY24 | FY25 | FY26
5.459 | 5.525 | 5.463
Source: U.S. Department of Defense Fiscal Year 2026 Budget Estimates (Request), U.S. Army, RDT&E Justification Book, Vol. 1a, BA1, June 2025
Beginning in FY25 and continuing in the service’s FY26 plans, attention has turned toward the edges of machine learning (ML) in cybersecurity. Whereas the use of ML algorithms for cyber detection and adversarial resilience is practically a given, CCRA has instead turned its attention to the manipulation of ML algorithms to guard against novel cyber threats.
The Army’s CCRA program carries out this research through a competitively selected consortium called the Cyber Security Collaborative Research Alliance (CRA). CRAs are partnerships between Army laboratories, private industry, and academia intended to transition innovative technologies to warfighters for Army Futures Command.
In a nutshell: these CRAs attempt to bridge the often-wide chasm between a given technology and its adoption and deployment by the Army. CRAs are funded under Army Collaborative Research and Tech Alliances:
AB7: Army Collaborative Research and Tech Alliances ($ millions)
FY24 | FY25 | FY26
58.118 | 57.650 | 29.659
Source: U.S. Department of Defense Fiscal Year 2026 Budget Estimates (Request), U.S. Army, RDT&E Justification Book, Vol. 1a, BA1, June 2025
Cybersecurity Meets the Edges of Machine Learning
The consortium ultimately aims to “significantly decrease the adversary’s return on investment” when considering a cyber attack on Army networks and minimize Army network performance degradation in the event of an attack.
The service’s FY26 requested funding for the CCRA will go entirely toward the continuation of its Adversarial-resilient Cyber Effects for Decision Dominance (“Adversarial-resilient”) project. Among its goals are to explore the “manipulation of machine learning based algorithms” and to “examine impact of uncertainties and incomplete information in machine learning algorithms for cyber deception and network intrusion detection.”
Remember: ML algorithms have common deficiencies. These deficiencies are application-agnostic, making their presence in Army network security a vulnerability in need of patching. The current focus of Adversarial-resilient is specifically ML algorithms’ inability to retain robust performance in the face of novel data (in this case, novel attack vectors).
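To make that brittleness concrete, consider a minimal sketch on synthetic data, where a simple feature-distribution shift stands in for a novel attack vector. The dataset, labels, and scikit-learn workflow below are illustrative assumptions, not anything drawn from the CCRA’s work:

```python
# Minimal sketch: a detector trained on known traffic degrades on a
# shifted, "novel" attack distribution. All data is synthetic; the
# feature shift stands in for an attack vector absent from training.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Known traffic: benign (0) vs. a familiar attack vector (1).
X, y = make_classification(n_samples=4000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Novel" traffic: the same task, but the feature distribution has moved.
X_novel, y_novel = make_classification(n_samples=1000, n_features=20,
                                       n_informative=10, shift=1.0,
                                       random_state=0)

detector = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on known traffic:", detector.score(X_test, y_test))
print("accuracy on novel traffic:", detector.score(X_novel, y_novel))
# The second score is typically far lower: nothing in training prepared
# the model for the shifted distribution.
```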
Adversarial-resilient’s FY26 plans include pursuing this research “to minimize the need for continual retraining” while maintaining resilience against adversarial cyber attacks. The plans also include an effort to “support classifier training in simulated environments” to enable transfer from simulations to actual deployment, “with minimal labeled data from captured network packets in the target environment.”
Each of these relates to broader, often neglected matters of foundational ML/AI research. The reference to “minimize the need for continual retraining” correctly implies that ML algorithms primarily acquire new capabilities by re-running the training process they underwent before deployment (whether simulated deployment or deployment within the target environment), presumably with added training data.
This re-training is inefficient: the algorithms must be taken out of operation to deal with a new, potentially urgent threat. It is also uncertain: re-training often leads the algorithms to improve on some measures, perhaps where data is more abundant and well-curated, and to degrade unexpectedly on others. There is currently no metric by which these improvements and degradations can be confidently or precisely predicted (hence the need for investigations into cyber science’s theoretical foundations).
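A small sketch can illustrate this uneven shift. Below, a classifier is retrained from scratch on data skewed toward one urgent threat class; per-class recall on the original test set can then move in either direction. The three traffic classes and the scikit-learn setup are hypothetical stand-ins:

```python
# Minimal sketch: full retraining on new, skewed data can shift
# per-class performance unevenly. All data and classes are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Original data: three hypothetical traffic classes
# (0 = benign, 1 = known attack A, 2 = known attack B).
X, y = make_classification(n_samples=3000, n_features=20,
                           n_informative=10, n_classes=3,
                           n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
before = recall_score(y_test, model.predict(X_test), average=None)

# New data arrives, heavily skewed toward class 1 (the urgent threat).
X_new, y_new = make_classification(n_samples=1500, n_features=20,
                                   n_informative=10, n_classes=3,
                                   n_clusters_per_class=1,
                                   weights=[0.1, 0.8, 0.1], random_state=1)

# Full retraining from scratch: the model comes out of operation, and
# per-class recall on the original test set can rise or fall.
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_new]), np.concatenate([y_train, y_new]))
after = recall_score(y_test, model.predict(X_test), average=None)

for cls, (b, a) in enumerate(zip(before, after)):
    print(f"class {cls}: recall {b:.2f} -> {a:.2f}")
```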
The reference to “minimal labeled data” is a corollary to the need for reduced re-training, indicating that the algorithms in question should be able to deal with some degree of arbitrariness in their deployments. This, in turn, requires an ability to deal with novelty (novel threats). This capability cannot rely on data labeled in advance by humans, as these annotations merely constrain the algorithm to known and/or existing attack vectors.
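One direction this points toward is few-shot adaptation: train on abundant simulated data, then update the deployed model incrementally with a handful of labeled captures rather than re-labeling a full corpus. A minimal sketch, again on synthetic stand-in data; whether so small an update actually recovers performance depends entirely on the shift, and the point here is the mechanism rather than the numbers:

```python
# Minimal sketch of sim-to-deployment transfer with minimal labels:
# train on abundant "simulated" traffic, then adapt incrementally with
# only 20 labeled samples from a shifted "target" network. All synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Abundant labeled data from the simulated environment.
X_sim, y_sim = make_classification(n_samples=5000, n_features=20,
                                   n_informative=10, random_state=0)
# The target environment differs from the simulation.
X_tgt, y_tgt = make_classification(n_samples=1000, n_features=20,
                                   n_informative=10, shift=0.5,
                                   random_state=0)
X_few, y_few = X_tgt[:20], y_tgt[:20]    # the scarce labeled captures
X_eval, y_eval = X_tgt[20:], y_tgt[20:]  # held out for evaluation

clf = SGDClassifier(loss="log_loss", random_state=0).fit(X_sim, y_sim)
print("simulation-only accuracy on target:", clf.score(X_eval, y_eval))

# Incremental update with the 20 labeled packets; no full retraining,
# and the model never leaves operation.
clf.partial_fit(X_few, y_few)
print("accuracy after few-shot adaptation:", clf.score(X_eval, y_eval))
```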
A possible analogy between commercial AI and defense AI for cybersecurity stands out here. The Transformer-based Large Language Models of current commercial enthusiasm, while lacking the ability to self-verify and self-check their outputs, are nonetheless capable of flexibly generating a wide breadth of outputs; they are less constrained than narrower systems, albeit without those systems’ accompanying accuracy.
It is conceivable that the CCRA will attempt to develop techniques for the automated generation of possible attack vectors (i.e., those not identified by humans) with transformers or some other architecture and then train separate algorithms for cybersecurity on those outputs, potentially enhancing the latter’s robustness to novelty by widening the scope of their application. This would rely on developing a technique to verify the first model’s outputs.
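Were such a pipeline pursued, one minimal shape it could take is sketched below: a generator proposes candidate attack-like samples, a verifier filters them, and the survivors widen the detector’s training set. The “generator” here is a crude perturbation stand-in rather than a transformer, the threshold-based “verifier” is exactly the unsolved piece flagged above, and none of this reflects the CCRA’s actual methods:

```python
# Illustrative sketch of a generate-verify-train pipeline. The generator
# and verifier are hypothetical stand-ins; all data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)

def generate_candidates(X_attack, n=500):
    """Stand-in generator: perturb known attack samples to propose
    novel-looking variants (a transformer could play this role)."""
    base = X_attack[rng.choice(len(X_attack), size=n)]
    return base + rng.normal(scale=0.5, size=base.shape)

def verify(candidates, model, threshold=0.6):
    """Stand-in verifier: keep only candidates a reference model scores
    as plausibly attack-like. Verifying generated outputs reliably is
    the open problem noted above."""
    probs = model.predict_proba(candidates)[:, 1]
    return candidates[probs > threshold]

reference = RandomForestClassifier(random_state=0).fit(X, y)
candidates = generate_candidates(X[y == 1])
kept = verify(candidates, reference)
print(f"kept {len(kept)} of {len(candidates)} generated candidates")

# Retrain the detector with the verified synthetic attacks appended,
# widening the scope of what it has seen.
X_aug = np.vstack([X, kept])
y_aug = np.concatenate([y, np.ones(len(kept), dtype=int)])
detector = RandomForestClassifier(random_state=0).fit(X_aug, y_aug)
```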
The Upshot
In any event, the Army is affirming its interest in pursuing foundational ML research in collaboration with university and industry partners.
The FY26 Request’s drop from FY25 may seem at odds with this interest, but it reflects several realignments from the University and Industry Research Centers program to the Electronic Warfare Basic Research Program (specifically, the Army Agile University Tech Collaborative Alliances project), where funding is roughly consistent with previous trends:
A62: Army Agile University Tech Collaborative Alliance ($ millions)
FY24 | FY25 | FY26
– | – | 57.892
Source: U.S. Department of Defense Fiscal Year 2026 Budget Estimates (Request), U.S. Army, RDT&E Justification Book, Vol. 1a, BA1, June 2025
U.S. Army RDT&E for the CCRA and for the programs supporting its research and technology alliances with university and industry partners is expected to continue in the immediate years ahead. Programs like the CCRA will likely continue to receive relatively stable funding as the fruits of ML, and possibly of hybrid AI techniques, are developed. ML algorithms have long since proven their worth in this domain. The goal for the U.S. Army is now to ensure that their fundamental shortcomings are addressed in ways that preserve the capabilities they provide.
Vincent Carchidi has a background in defense and policy analysis, specializing in critical and emerging technologies. He is currently a Defense Industry Analyst with Forecast International. He also maintains a background in cognitive science, with an interest in artificial intelligence.