Automation without Awareness: Integrating AI with Military Operations
The advancement of artificial intelligence (AI) opened the possibility for companies and major industries to use AI to maximize profits and pursue breakthrough changes. That enthusiasm was short-lived, however, once failures became widely reported. From compilations of botched AI drive-through orders on social media to chatbots encouraging vulnerable users to take their own lives, AI is unpredictable, and its integration into daily life is raising serious ethical concerns in almost every industry. The United States military, meanwhile, is beginning to integrate AI into increasingly complex decision-making. This article explores the potential drawbacks of using AI to make military decisions.
“The Board concluded that the autonomous operation of the Patriot battery was a contributory factor”
On March 22, 2003, during the invasion of Iraq, American soldiers fired Patriot interceptor missiles at what was believed to be an Iraqi anti-radiation missile. The “missile” was in fact a UK Tornado fighter jet, tail number ZG710, and both crew members died instantly. An investigation revealed that a variety of factors led to the incident:
Patriot System Anti-Radiation Missile Classification
Patriot Anti-Radiation Rules of Engagement
Patriot Firing Doctrine and Training
Autonomous Patriot Battery Operation
Patriot IFF Procedures
ZG710’s IFF System
Aircraft Routing and Airspace Control Measures Instructions
Global Patriot Missile | Raytheon
The documentation shows that the automated system did its job as designed: it identified ZG710 as a potential threat to coalition forces. The investigation noted that “Patriot crews are trained to react quickly, engage early, and to trust the Patriot system.” The crew responsible also lacked the fullest possible operational picture due to a communications issue. ZG710’s Identification, Friend or Foe (IFF) system was not functioning at the time, and as the aircraft came into range, the Patriot classified it as a hostile missile. Moreover, the Patriot crew and system were trained only to target threats, not to identify friendly air tracks or weed out false positives. As the report states, the system identified ZG710 through a broad category: generic programming led to the misclassification and the deaths of two Royal Air Force aircrew.
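To make that failure mode concrete, here is a minimal sketch, assuming a deliberately simplified rule set. This is a hypothetical illustration, not the Patriot system’s actual logic: a classifier in which every branch for a non-responding track ends in a hostile label, with no “unknown, hold fire” outcome for a friendly aircraft whose IFF transponder has failed.

```python
from dataclasses import dataclass

@dataclass
class AirTrack:
    speed_ms: float      # ground speed in meters per second
    descending: bool     # radar shows a descending flight profile
    iff_response: bool   # did the track answer the IFF interrogation?

def classify(track: AirTrack) -> str:
    """Hypothetical, deliberately simplified threat logic.

    Every non-responding track terminates in an engagement-ready
    label; there is no 'unknown, hold fire' branch for a friendly
    jet whose IFF transponder is silent.
    """
    if track.iff_response:
        return "FRIENDLY"
    if track.descending and track.speed_ms > 200:
        # A fast, descending, non-responding track matches the broad
        # anti-radiation-missile profile -- the kind of generic
        # category that caught ZG710.
        return "HOSTILE: anti-radiation missile"
    return "HOSTILE: unidentified"

# A returning jet with a failed IFF system:
print(classify(AirTrack(speed_ms=250.0, descending=True, iff_response=False)))
# -> HOSTILE: anti-radiation missile
```

The structural point is that a crew trained to “trust the Patriot system” is handed only engagement-ready labels; nothing in the output invites them to question the broad category behind it.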
The goal of this analysis isn’t to blame the crew or to cast doubt on the technological advances made by the West, but rather to examine the dangers of over-relying on AI to make military decisions. Modern military forces stress the need for critical thinking and advanced, complex decision-making processes. Delegating these decisions to AI strips commanders of the opportunity to think critically and reduces their judgment to simple outputs. As Carl von Clausewitz warned, “Everything in war is very simple, but the simplest thing is difficult.”
Smoke and Fire from Gaza | Hatem Moussa AP
Following Hamas’ surprise attack against Israel on October 7th, 2023, Israel responded with overwhelming force and invaded Gaza. The scale of civilian casualties and mismanagement has led many in the international community to accuse the IDF of genocide. The following month, in November, an investigation published by +972 Magazine and Local Call described the IDF as operating a “mass assassination factory.” It revealed that the IDF was using an AI system called “Gospel” to automatically recommend bombing private residences suspected of housing Hamas operatives. After the accidental strike on World Central Kitchen volunteers, further reporting revealed that the IDF was also using an AI system called “Lavender” to classify potential threats. The IDF kept approving the resulting kill lists without thoroughly verifying whether the targets were correct; some of these decisions were reportedly made in 20 seconds.
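What the reporting describes is a review step too short to be meaningful. Here is a minimal sketch, assuming a hypothetical 20-second budget per target; it illustrates the failure pattern, not the IDF’s actual software. With so little time, the only input the reviewer can realistically consult is the machine’s own confidence score, so the human ends up verifying the AI with the AI’s own output.

```python
# Illustrative stand-in for machine-generated recommendations:
# (target_id, machine_confidence) pairs, produced far faster than
# any human could independently investigate them.
recommendations = [(f"target-{i:03d}", 0.51 + (i % 5) * 0.1) for i in range(100)]

def review(recs, seconds_per_target=20):
    """Hypothetical sketch of a 'human on the loop' in name only.

    With roughly 20 seconds per target, no independent check of
    identity, location, or collateral risk is possible -- the only
    check that fits in the budget is the system's own score.
    """
    approved = []
    for target, confidence in recs:
        if confidence > 0.5:  # rubber-stamp threshold
            approved.append(target)
    return approved

print(f"{len(review(recommendations))} of {len(recommendations)} approved")
# -> 100 of 100 approved
```

Everything passes, because the review criterion is derived from the same system it is supposed to check; that is the verification gap the reporting points to.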
“In addition, delegation of escalation control to machines could mean that minor tactical missteps or accidents that are part and parcel of military operations in the chaos and fog of war, including fratricide, civilian casualties, and poor military judgment, could spiral out of control and reach catastrophic proportions before humans have time to intervene.”
A large concern among American scholars is that AI integration will outpace human judgment. For militaries, this is a double-edged sword: while they focus on outpacing their adversaries, AI can assist in research and development, but it also poses tremendous risk. The ambiguous reasoning behind complex military decision-making is far beyond AI’s reach, and AI may never grasp the full complexity of war. As the two cases above show, automated decision-making speeds up the process, but the AI never verified its own conclusions; it relied on inputs shaped by human error and produced simplistic answers.
The United States is fully committed to the responsible use of AI in the military domain | US Mission to the Organization for Security and Cooperation in Europe (OSCE)
Analysis
AI presents a significant problem for militaries that use it without verification. One of the biggest dilemmas modern militaries face is complex decision-making under severe time constraints. AI can make these decisions in a fraction of the time, but in doing so it is prone to egregious mistakes. Militaries are incentivized to update their technology and procedures faster than near-peer adversaries, and AI can dramatically accelerate those updates and push past human limits. An article titled The Machine Beneath: Implications of Artificial Intelligence in Strategic Decisionmaking, written by Matthew Price, Stephen Walker, and Will Wiley, lays out these issues and concerns in detail.
While AI presents major issues, its value to militaries cannot be dismissed. Its speed and capacity to enhance military operations far outpace human constraints, and most armies are embracing this change at a rapid pace. AI can enhance military operations, but it cannot replace them: an AI making decisions lacks the full operational picture.
The danger of AI lies not in its intelligence, but in military and policy officials surrendering the entire decision-making process to it. AI can produce data and react without fatigue, but it cannot weigh uncertainty, a key factor in military decision-making. War is ambiguous, and scholars have long argued that its problems resist simple solutions. When decision-making is outsourced to AI, accuracy is sacrificed for speed. AI can execute orders and generate recommendations, but it cannot comprehend the weight and consequences of each suggestion.