DARPA Funds AI for Enhanced Battlefield Decision-Making

The U.S. Defense Advanced Research Projects Agency (DARPA) has allocated substantial funding for research on incorporating artificial intelligence (AI) into strategic decision-making on the battlefield. Through the Strategic Chaos Engine for Planning, Tactics, Experimentation and Resiliency (SCEPTER) project, the agency aims to develop AI technology that can cut through the fog of war. DARPA envisions advanced AI models that simplify the complexities of modern warfare by picking out critical details from a flood of irrelevant information, ultimately speeding up real-time combat decisions.

Avi Goldfarb, who holds the Rotman chair in artificial intelligence and healthcare at the University of Toronto’s Rotman School of Management and serves as chief data scientist at the Creative Destruction Lab, emphasized the value of a tool that can fill informational gaps in military contexts, including the chaos of battle. Goldfarb, who is not affiliated with the SCEPTER initiative, also stressed the limits of predictive AI systems: their role is not to exercise judgment or make decisions, but to supply information that guides human decision-makers. He further cautioned that adversaries may try to corrupt or manipulate the information such systems produce, which makes full automation risky in some scenarios.

AI assistance holds particular promise for operations that span land, sea, air, space, and cyberspace. DARPA’s SCEPTER project seeks to push AI-powered war simulations beyond existing techniques: by combining human expertise with AI’s computational power, the agency hopes to reduce the computational demands of military simulations and, in turn, produce better and faster war strategies.

Three companies have secured funding through the SCEPTER project: Charles River Analytics, Parallax Advanced Research, and BAE Systems. Machine learning (ML) is a pivotal area in which AI could augment battlefield decision-making. In ML, a computer is shown examples, such as historical wartime scenarios, from which it “learns” to make predictions about new situations. Goldfarb asserted that the most significant advancements in AI have occurred in machine learning.
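
To make that idea concrete, here is a minimal sketch of supervised machine learning in Python using scikit-learn. The scenario features, labels, and numbers are invented purely for illustration and have no connection to SCEPTER or any real dataset:

```python
# A minimal sketch of supervised machine learning: the model is shown
# historical examples (features plus known outcomes) and derives a rule
# it can apply to a new, unseen scenario. All data here is invented
# for illustration and has no connection to SCEPTER.
from sklearn.linear_model import LogisticRegression

# Each row is a hypothetical past scenario:
# [force_ratio, supply_level, terrain_advantage]
X_train = [
    [1.5, 0.9, 1],  # superior numbers, well supplied, favorable terrain
    [0.7, 0.4, 0],  # outnumbered, low supplies, unfavorable terrain
    [1.2, 0.8, 1],
    [0.6, 0.3, 0],
    [1.8, 0.7, 1],
    [0.9, 0.5, 0],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = objective achieved, 0 = not

model = LogisticRegression()
model.fit(X_train, y_train)  # "learn" a prediction rule from the examples

# Apply the learned rule to a scenario the model has never seen
new_scenario = [[1.1, 0.6, 1]]
print(model.predict(new_scenario))        # predicted outcome class
print(model.predict_proba(new_scenario))  # estimated probabilities
```

The point is the workflow, not the toy numbers: the model extracts a pattern from labeled past examples and uses it to score a situation it has not encountered before.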

Toby Walsh, the chief scientist at the University of New South Wales AI Institute in Australia and an advocate for imposing limits on autonomous weapons, concurred but stressed that machine learning alone would be insufficient. Battles rarely repeat themselves, he noted, because adversaries quickly learn to adapt and avoid past mistakes. He therefore advocated combining machine learning with other AI methodologies.

The SCEPTER project also emphasizes two other approaches: heuristics, which are rapid but imperfect shortcuts for otherwise intractable problems, and causal AI, which infers cause-and-effect relationships to approximate human decision-making. Even so, Goldfarb stressed that the most cutting-edge AI technologies have their limits, and that human intervention will always be required for the final decision.
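
To illustrate the heuristic half of that pairing, here is a sketch of a greedy rule applied to a knapsack-style allocation problem. The supplies, values, and weights are hypothetical and not drawn from SCEPTER; the point is that the shortcut answers immediately but can miss the best solution:

```python
# Illustration of a heuristic: a greedy shortcut for a knapsack-style
# allocation problem (loading supplies of differing value and weight
# under a capacity limit). It runs almost instantly but is not
# guaranteed to find the optimum. All names and numbers are hypothetical.

def greedy_allocate(items, capacity):
    """Pick items by best value-per-weight ratio until capacity runs out."""
    chosen, total_value = [], 0
    # Consider the densest items (value per unit of weight) first
    for name, value, weight in sorted(items, key=lambda it: it[1] / it[2],
                                      reverse=True):
        if weight <= capacity:
            chosen.append(name)
            total_value += value
            capacity -= weight
    return chosen, total_value

items = [  # (name, value, weight)
    ("medical kits", 60, 10),
    ("ammunition", 100, 20),
    ("rations", 120, 30),
]
print(greedy_allocate(items, capacity=50))
# Greedy picks medical kits + ammunition (value 160), yet the true
# optimum for capacity 50 is ammunition + rations (value 220):
# fast and usually reasonable, but imperfect, which is exactly the
# trade-off heuristics make.
```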

It’s worth noting that the United States is not the only nation pinning its hopes on AI to improve wartime decision-making. According to Walsh, China has made clear that it aims for military and economic supremacy through AI, and it already rivals the U.S. in AI-related patents and scientific publications.

It’s important to distinguish the SCEPTER project from AI initiatives focused on developing lethal autonomous weapons (LAWs), systems that can independently search for and engage targets based on preprogrammed constraints and descriptions. Walsh warned of the potentially catastrophic consequences of such systems, calling them potential instruments of mass destruction. He pointed out that they could lower the barriers to warfare, increase uncertainty about who is attacking, and pose moral dilemmas, since machines lack moral agency and cannot be held accountable for their actions in war.
