
On May 10, 2024, South Africa submitted an urgent request to the International Court of Justice (ICJ) to expand and amend its provisional measures in the ongoing case concerning allegations of genocide in Gaza (South Africa v. Israel). Just two weeks later, on May 24, the ICJ not only reaffirmed its previous provisional measures but also introduced new ones, underscoring the gravity of the situation.
In a parallel legal move, on May 20, the International Criminal Court (ICC) Prosecutor applied for arrest warrants for Israeli Prime Minister Benjamin Netanyahu and Defense Minister Yoav Gallant, as well as for leaders of Hamas, in connection with the situation in the State of Palestine.
With the world’s leading judicial bodies now focused on the devastating tragedy unfolding in Gaza, the need to examine the ethical implications of this conflict on a broad geographical scale, including the role of emerging AI technologies in warfare, has never been more urgent. This article focuses on one critical aspect of South Africa’s application: Israel’s extensive and audacious deployment of AI on the battlefield.
Legal Context
In its submission, South Africa emphasizes that the Israeli assault on Rafah escalated the crisis to unprecedented levels, posing an imminent threat to humanitarian aid channels, essential services within Gaza, and the very existence of Palestinians in the region as a group. This prompted South Africa to view the previous measures as inadequate. The escalation not only amplifies the existing humanitarian crisis but also introduces fresh facts that highlight the irreparable harm to the rights of Palestinians.
The deployment of Artificial Intelligence (AI) technologies by Israel in the Gaza conflict, exemplified by the creation of “kill lists” and automated targeting systems, raises significant ethical and legal concerns under International Humanitarian Law (IHL). It calls for urgent international oversight and regulation to prevent disproportionate civilian harm and ensure accountability.
AI in Warfare
Philosopher Brian Jack Copeland describes AI as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”
This definition is broad enough to encompass various concepts of what an AI system can do, while also being specific enough to highlight the essential features relevant to our discussion.
In its submission to the ICJ, South Africa has raised the alarm regarding Israel’s use of AI to compile “kill lists”. This practice underscores a deeply troubling trend in modern warfare, in which AI algorithms are employed to identify and target individuals or groups for lethal action. The alleged “kill lists” raise profound ethical and humanitarian concerns, as they enable the arbitrary targeting of individuals. Moreover, the entire process of target identification, selection, and execution comes with little transparency or accountability.
South Africa’s protest against Israel’s use of AI in this manner serves as a clear reminder of the immediate need for international oversight and regulation of AI technologies in warfare. The unchecked proliferation of such capabilities not only threatens the lives of innocent civilians but also undermines the very fabric of IHL. As the debate surrounding the ethical boundaries of AI in warfare intensifies, robust measures must be put in place to prevent its misuse and safeguard the dignity and rights of all individuals caught in the crossfire (here, here, and here).
Ethics and Legal Compliance
There has been highly alarming investigative media coverage featuring former Israeli servicepeople who have shared shocking testimonies about these AI practices (here, here). Legal experts, on the other hand, appear to downplay these claims, attributing them to a lack of understanding of military and legal complexities.
This article, however, aims to strike a balance between these contrasting viewpoints. It seeks to extract analytical insights that reveal the extent of AI’s role in the catastrophic situation in Gaza and consider its broader implications.
When it comes to the limitations on states’ ability to select their means of warfare, Article 36 of the First Additional Protocol to the Geneva Conventions (API) is a crucial reference point. The provision requires that states evaluate new weapons, means, or methods before deploying them in combat. While there are no precise and mandatory guidelines for the process, the restriction of means and methods of warfare remains a cornerstone of IHL.
It is essential to understand the definitions of “weapon,” “means of warfare,” and “methods of warfare.” The term “weapon” encompasses a variety of capabilities used in combat to cause damage to objects or injury or death to persons. “Means of warfare” extends to military equipment or platforms that facilitate military operations. To determine what qualifies as a means of warfare, we must consider how the equipment affects a military’s operational capabilities (here, pp. 17-18). Finally, “methods of warfare” refers to the military strategies, practices, and tactics used in operations.
Israel’s AI systems are not themselves weapons but rather fundamental components of weapon systems such as the Iron Dome. Almost every other AI system, however, falls under the definition of means of warfare, as it facilitates Israel’s military operations. Weapons, means, and methods are interconnected elements within a single paradigm, and virtually every major operation in Gaza is now influenced to some extent by these systems.
The problem with the review requirement is that Israel is not a party to API, and Article 36 is not considered customary law. This means Israel has no legal obligation to conduct weapon reviews. There is an argument that Israel is still required to conduct basic preventive reviews concerning the right to life from a human rights perspective, but this argument is relatively weak.
Impact on Gaza
Israel’s war in Gaza has plunged the region into an abyss of unprecedented death and destruction. Almost the entire population of Gaza is displaced, and virtually the entire strip now lies in ruins. The term “humanitarian crisis” scarcely captures the enormity of the suffering. What role, if any, has AI played in this catastrophe?
The State of Israel is one of the leading actors in developing cutting-edge military technology. While AI has been around for decades, recent years have seen rapid advances, particularly in its military applications. Yet the rapid evolution of AI-driven systems demands meticulous supervision to ensure ethical and lawful deployment. Much of the destruction and the associated civilian casualties in Gaza have been linked to these AI platforms.
Is Human Intelligence Not Enough to Wage a War?
The main reason for incorporating AI systems into military operations is their ability to swiftly analyze conflict-related data and enable more efficient targeting. AI is meant to facilitate precision strikes, especially within technologically advanced militaries, where the seamless integration of high-tech capabilities provides a considerable strategic edge.
High-precision targeting is expected to mitigate civilian casualties. To be clear, civilian casualties are not entirely ruled out under IHL. Rather, the underlying rationale is to minimize these losses, referred to as “collateral damage,” as far as possible.
One of Israel’s best-known uses of AI is the Iron Dome, a defense system designed to counter missile threats. However, AI’s primary role is often digesting vast amounts of data. Israel also conducts extensive surveillance in the Palestinian territories: Palestinians passing through numerous checkpoints and monitored by countless cameras have their facial features and other biometric data registered in Israeli intelligence databases. This information is later cross-referenced with existing databases for security, law enforcement, and targeting purposes.
AI significantly aids Israeli forces in targeting through platforms like Alchemist, which gathers and transfers data to another platform called Fire Factory. This system analyzes extensive datasets, including historical information on previously authorized strike targets. It then categorizes the data into four main groups:
- Tactical Targets: This category encompasses militants, cells, armored vehicles, and militant headquarters.
- Family Houses: These are residences of Hamas or Islamic Jihad operatives.
- Underground Targets: These include Hamas or Islamic Jihad tunnels.
- Power Targets: This refers to civilian residential buildings.
In parallel, Fire Factory prioritizes and allocates specific strike targets.
The next layer involves an even more sophisticated system called The Gospel, which generates outputs suggesting specific targets, possible munitions, and anticipated collateral damage. This system accomplishes in seconds what previously required numerous analysts several weeks to complete. However, when it comes to making decisions that differentiate between civilian and non-civilian targets in warfare, speed might not always be the most beneficial feature of such a system.
Automation Bias and Its Impact on Military Decision-Making
The operations and strategies informed by AI outputs are often controversial. For example, striking so-called Power Targets aims to put pressure on Hamas through the civilian population, in the hope that Gazans will turn against the group and make it easier to defeat. This, however, contradicts IHL, which prohibits disproportionate collateral civilian damage. Under IHL, the anticipated military advantage must outweigh the collateral damage to civilians, and this assessment pertains to the concrete and direct military advantage of the specific attack, not the overall military objective, such as the ultimate defeat of Hamas. It would take a particularly skewed rationale to argue that a concrete military advantage, such as the elimination of individual Hamas operatives, renders the targeting of residential housing and the enormous resulting collateral damage acceptable. Such attacks would be outright war crimes.
Another AI system, Lavender, is designed to target specific individuals. It uses historical data and surveillance to generate thousands of targets associated with Hamas and Islamic Jihad. But how accurate are these targets? Are all of the targeted individuals directly participating in hostilities, making them lawful targets under IHL?
No system can be perfect in this regard. Estimates suggest that about 10% of Lavender’s targets are incorrect, which means that 90% are supposedly correct. Yet this depends on how Israel defines its lawful targets. The problem is that Hamas is deeply embedded in the Gaza Strip, with both administrative and military wings.
As a result, police officers, doctors, and civil society members inevitably interact with Hamas. Israel, however, uses a broad definition and counts such individuals within the 90%. Because Israel applies a much looser definition of lawful targets than IHL does, the nominal 10% error rate may understate the true error rate considerably.
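A purely hypothetical calculation illustrates how much the definition matters. The 10% figure comes from the estimates cited above; the assumption that only 60% of the system’s “correct” hits would also qualify as lawful targets under IHL’s stricter direct-participation standard is an invented number used solely for illustration:

```latex
% Hypothetical illustration only: the 0.60 factor is an assumed value, not a reported figure.
\[
\underbrace{0.90}_{\text{meet the system's own criteria}}
\times
\underbrace{0.60}_{\text{of those, also lawful under IHL}}
= 0.54
\quad\Longrightarrow\quad
\text{effective error rate} = 1 - 0.54 = 46\%.
\]
```

Under that assumption, nearly half of the strikes based on the system’s output would hit people who are not lawful targets, even though the system itself reports only a 10% error rate.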
One thing is certain: AI does not produce hard facts; it is not designed for such purposes. AI generates predictions, often high-quality and reliable, but not always, especially when it comes to life-and-death decisions in armed conflict. After all, there is no “intelligence” in AI as we traditionally understand it. AI’s performance depends entirely on the understanding of the humans who create the system. In real-life military operations, it is military personnel who decide whether to engage a selected target. However, if an overall military system places too much trust in AI, human approval can become biased: it takes significantly more effort to go against an AI prediction. These systems, designed to ease human decision-making, in fact significantly weaken it, making compliance with AI predictions almost automatic.
The Broader Implications
One school of thought that attempts to explain the role of AI in the ongoing conflict in Gaza often oversimplifies the grave consequences of the Israeli assault. Its proponents focus on the categorization critics use for Israeli AI systems, which are broadly dubbed autonomous weapon systems, and argue that only systems like the Iron Dome can be classified as such, unlike Fire Factory, the Gospel, and Lavender. While this is true, the problem lies elsewhere.
Whether or not these AI systems are fully autonomous, and whether or not they are weapons at all, they significantly influence military decisions and actions. They shape targeting processes, often with life-or-death consequences, and raise serious questions about accountability and the proportionality of attacks. The critical concern is thus how these AI-driven decisions affect the conflict and the lives of those involved, rather than the technical classification of the systems themselves.
While it is true that Israel Defense Forces (IDF) commanders hold the ultimate decision-making authority in offensive operations, the challenge lies in combating what is known as “automation bias.” Even though IDF commanders can disregard recommendations from systems like the Gospel, and every target must ultimately be authorized by an IDF commander, avoiding automation bias can be difficult, particularly during heightened hostilities.
Automation bias refers to the tendency to overly rely upon or trust AI output. Although AI decision support systems are valuable tools in combat, accelerating the pace of decision-making and offering associated advantages, the risks of automation bias can be significant.
A further rush to deploy AI systems on the battlefield will likely yield even less predictable outcomes and more biased targeting automation, exacerbating civilian casualties.
Can Israel’s Use of AI Be Restrained?
In February 2023, the United States introduced a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy at the Summit on Responsible AI in the Military Domain (REAIM).
The declaration outlines a series of non-legally binding guidelines on best practices for the responsible military use of AI. The guidelines place strong emphasis on the inspection of military AI and on thorough testing and evaluation at every stage of a system’s lifecycle. They also stress the importance of identifying and preventing unintended behaviors and call for high-risk applications to undergo senior-level review.
While Israel participated in the REAIM Summit, it refrained from endorsing the Political Declaration and did not sign the Summit’s communiqué. If the United States’ commitment to military AI norms is sincere, then instead of merely reiterating calls on the Israeli government to protect Palestinian civilian lives, it should exert effective pressure on Israel to take immediate and decisive steps toward the responsible deployment of AI systems on the battlefield.
Conclusion
Israel’s potential over-reliance on AI-generated outputs poses significant risks to civilian protection and compliance with international humanitarian law. Systems such as Alchemist and the Gospel base their outputs on statistical correlations whose nature is often only partially understood by decision-makers. Military commanders, who bear responsibility under IHL for faulty targeting, already struggle with a nuanced understanding of traditional intelligence production cycles, leaving them ill-equipped to adequately supervise these processes.
The automation of killings and the erosion of human oversight are particularly concerning, especially in light of allegations of genocide and crimes against humanity, which presuppose acts committed on a widespread or systematic scale. Automation leads individuals to carry out actions without fully considering the implications or making conscious decisions. This is compounded by the dehumanization factor, which strips victims of their human status: targets generated and eliminated by AI systems are technologically dehumanized. AI-enabled targeting systems, with their emphasis on speed and scale, inherently complicate the exercise of morally and legally limited violence.
The current situation on the ground suggests that Israel’s AI tools are not calibrated to minimize harm to civilians and civilian objects. Militaries are developing and deploying these technologies in real time and in real battle scenarios, learning as they go. This can minimize the attacker’s involvement, reduce its risks, and advance new methods of warfare, but it does not minimize civilian casualties. And that is not a mere failure or omission; it amounts to grave breaches of international law and the outright commission of international crimes.
To summarize, the reliance on AI systems in military operations raises significant ethical, legal, and practical concerns. Without proper oversight and control, these technologies risk exacerbating civilian harm and violating international humanitarian law, leading to serious international crimes committed by those deploying and operating these systems.
AI systems come with inherent risks that raise questions about individual accountability. IHL focuses on pre-operation legality assessments by the attacking state rather than on results. However, the resulting high death toll raises serious concerns about the usability of AI systems in warfare. Even if AI is not entirely to blame for the large scale of civilian casualties, its claimed utility is seriously called into question in the context of Gaza.
AI, in its essence, is neither a weapon nor a replacement for human decision-making in the preparation and execution of armed conflicts. It functions as an algorithm for analysis and prediction, devoid of intelligence in the human sense. The peril lies in human overconfidence in AI systems for targeting, decision-making, and execution. The danger of the military proliferation of AI lies not in its capacity to overtake human operations but in the shifting of responsibility from accountable humans to abstract systems. This process takes a severe toll on human well-being and life and creates a significant impunity gap. Israel stands at the forefront of this shift.
P.S. Armenia’s Perspective
This article takes a broader perspective on an issue that may seem distant from Armenia. However, given the escalating tensions involving Russia, Ukraine, Gaza, and Iran, the use of AI in warfare is becoming a global concern, one that could impact any nation if not addressed proactively.
For Armenia, this is especially relevant given its ongoing military transformation, including collaborations with India and France and growing plans with the U.S. As Armenia builds its military doctrine from the ground up, prioritizing compliance with IHL in both offensive and defensive operations is crucial for ensuring robust security. Armenia is all too familiar with the devastating consequences of unchecked violence and with the importance of holding states accountable to international humanitarian standards. IHL compliance is not just a legal obligation but a critical component of a strong security framework, and it is especially important as Armenia seeks to modernize its military while maintaining a commitment to human rights and international law.
Armenia’s focus on IT development makes it imperative to closely monitor AI’s role in modern warfare. Armenia must understand AI’s applications in this context—how it can be used, and how to counter its potential threats. The article does not argue that AI will necessarily be the next big thing Armenia might miss, as was the case with drones. Rather, it emphasizes that overlooking this critical trend, especially amid Armenia’s military transformation, would be a grave oversight.
The war with Azerbaijan has left Armenia with a much longer frontline, challenging its ability to defend this border with limited forces. AI could become a crucial tool for effectively allocating resources and organizing defense along this difficult-to-defend borderline. For instance, AI-driven surveillance systems could enhance border security by detecting and responding to threats more efficiently, much as India is integrating smart fences, sensors, and AI-driven analytics into its Comprehensive Integrated Border Management System (CIBMS) to monitor its borders and respond to breaches.