by Greg Giaquinto, Forecast International.
The U.S. Air Force’s Operational Awareness Technology project is a multifaceted effort that will continue to be supported by the defense budget even amid pressure to cut costs. Forecast International (FI) projects that the Air Force will allocate around $215 million to this project over the next 10 years, with some $39 million to be spent in FY15 and FY16 alone. Driving these expenditures is the Air Force’s push for a network-centric, collaborative intelligence analysis capability.
The Operational Awareness Technology project conducts research into technology that enables the fusion of multi-intelligence sources in order to more accurately identify and track objects and improve situational awareness. This technology could be employed to better anticipate battlespace threats – in the air, on the ground, and in space – and would also provide enhanced awareness of cybersecurity threats.
Another aspect of this project is the development of advanced exploitation technologies that help maximize the intelligence gained from U.S. adversaries’ communications. Special areas of study include spectral detection and geolocation; signal recognition and analysis; and data tagging, tracking, and tracing via the insertion of secure, imperceptible embedded signals.
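The budget documents do not describe the embedding techniques under study. As a rough illustration of the general idea, the toy sketch below uses classic least-significant-bit (LSB) embedding to hide a tracking tag in a digital signal without perceptibly altering it; the function names and sample data are hypothetical, not drawn from the program.

```python
# Toy illustration of imperceptible signal embedding (LSB steganography).
# This is an assumed, generic technique for illustration only, not the
# Air Force's actual method.

def embed_tag(samples, tag_bits):
    """Hide tag_bits in the least-significant bits of integer samples."""
    out = list(samples)
    for i, bit in enumerate(tag_bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the LSB with one tag bit
    return out

def extract_tag(samples, n_bits):
    """Recover the first n_bits hidden in the samples' LSBs."""
    return [s & 1 for s in samples[:n_bits]]

signal = [1000, 1003, 998, 1001, 1005, 997, 1002, 1000]
tag = [1, 0, 1, 1, 0, 1, 0, 0]
tagged = embed_tag(signal, tag)

assert extract_tag(tagged, len(tag)) == tag
# Each sample changes by at most 1, so the tag is effectively imperceptible.
assert all(abs(a - b) <= 1 for a, b in zip(signal, tagged))
```

Because each sample moves by at most one quantization step, the tag survives in the data while remaining below the noise floor of most signals, which is the property "imperceptible" embedding trades on.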
The Operational Awareness Technology project is broken out into the following three subprojects:
Multi-Source Fusion Technologies. Develops higher-level fusion and enabling text information/knowledge base technologies “to achieve situational awareness and understanding at all command levels for dynamic planning, assessment, and execution processes.”
Exploitation Technologies. Develops digital information exploitation technologies for electronic communications to increase the accuracy, correlation, and timeliness of the information obtained.
Next Generation Command Technologies. Develops modeling and simulation technologies for the “next generation of planning, assessment, and execution environments.”
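The budget line does not specify fusion algorithms, but the core idea behind multi-source fusion can be sketched with a standard textbook technique: combining independent sensor estimates by inverse-variance weighting, so that more confident sources count for more. The sensor names and numbers below are hypothetical.

```python
# Minimal sketch of multi-source fusion via inverse-variance weighting.
# A generic, assumed technique for illustration; not the program's method.

def fuse(estimates):
    """Fuse (value, variance) pairs into one estimate.

    Lower-variance (more confident) sources get proportionally more
    weight, and the fused variance is no worse than the best source's.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# e.g., two independent estimates of the same object's position
radar = (102.0, 4.0)   # (estimate, variance) from a precise source
elint = (98.0, 16.0)   # (estimate, variance) from a noisier source
pos, var = fuse([radar, elint])
print(pos, var)  # fused estimate sits nearer the more confident source
```

The payoff of fusing is visible in the variance: here it drops below either source's alone, which is why combining SIGINT, imagery, and other feeds can yield tracks more accurate than any single sensor provides.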
Among upcoming efforts, this project will investigate the use of motion detection/tracking and content-based imagery retrieval for detecting objects of interest. It will also apply advanced reasoning techniques to Multi-INT data, including SIGINT and space surveillance network data, to assess space objects and determine the significance of any activity.
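The project description leaves the motion-detection approach unspecified; the simplest baseline for the task is frame differencing, which flags pixels whose intensity changes between consecutive frames. The sketch below is an assumed, minimal version of that baseline, with hypothetical frame data.

```python
# Minimal frame-differencing sketch for motion detection.
# An assumed baseline technique for illustration only.

def moving_pixels(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose intensity changed by more than threshold."""
    return [
        [abs(c - p) > threshold for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 90, 10],   # one pixel brightens sharply between frames
        [10, 10, 10]]

mask = moving_pixels(prev, curr)
# Only the changed pixel is flagged; clusters of flagged pixels would
# then be grouped and tracked across frames as candidate objects.
```

Operational systems would layer background modeling, noise suppression, and track association on top of this, but the thresholded difference mask is the starting point most pipelines share.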