When algorithms decide who dies: AI’s role in Middle East conflicts

Business Tech 07-03-2026 | 02:43


Growing evidence points to artificial intelligence shaping lethal operations from Gaza to Iran, raising ethical and political alarm worldwide.
Illustrative image. (AI)

Modern wars are no longer fought solely with planes, missiles, and traditional espionage networks; algorithms have now entered the core of the decision to kill. In the Middle East, where technology intersects with open conflicts and cross-border assassinations, a troubling question arises: to what extent has artificial intelligence become a partner in target selection? The question grows more pressing as reports and investigations accumulate, highlighting the growing role of intelligent systems in building an Israeli target bank stretching from Gaza and Lebanon to Iran.


The danger lies not only in the development of technical means but in their ethical and human implications. When an algorithm reduces a human life to a set of data points, and suspicion becomes a numerical score, the line between surveillance and killing grows narrower than ever. At that point, the questions arise: Who decides? How do they decide? And who is accountable when the machine errs, or when the decision is delegated under the guise of human oversight?


The machine approaches the decision to kill

From Gaza to Lebanon and on to Iran, one question rises to the forefront of modern Israeli warfare: Has artificial intelligence become a direct partner in the decision to kill, rather than merely an aid in gathering information? This question has gained renewed urgency following reports and accusations that Israel is using AI systems to select targets within Iran, amid growing concerns about a decline in actual human oversight over lethal decisions.


In April 2024, journalistic investigations sparked widespread controversy by revealing the Israeli military's use of the Lavender system to generate extensive lists of assassination targets in Gaza. According to these reports, officers' role was at times reduced to quickly approving the system's recommendations, raising serious questions about how much human judgment actually shaped bombing and assassination operations.


Algorithmic targeting expands regionally

In Lebanon, this pattern became evident during the escalation of Israeli assassination operations in the 2024 war, which targeted prominent field and political leaders in Hezbollah, Hamas, and the Islamic Group, killings that disrupted these organizations' leadership structures. Although available reports do not directly confirm that these targets were selected using artificial intelligence, linking them to the practices revealed in Gaza is no longer a distant hypothesis but a serious political and military possibility.

In Iran, recent accusations suggest the same scenario on an even more dangerous scale. The issue now extends beyond the nature of the target to the mechanism of its selection, and whether artificial intelligence plays a decisive role in determining who is monitored, classified, and targeted. While Israel maintains that the final decision remains human, critics argue that this human oversight may serve merely as a formal cover rather than a real safeguard.


At the core of this transformation, the issue extends beyond Israel or any single theater of conflict; it concerns the future of warfare itself. When algorithms become part of the assassination apparatus, the danger surpasses mere technical advancement: it signals a shift toward a combat model in which responsibility is diffused. This is precisely where the real peril lies: artificial intelligence moving from an analytical tool to an active partner in the decision to kill, in a region already teetering on the edge of chronic instability.