Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law


Natasha Karner, RMIT University

The Israeli army used a new artificial intelligence (AI) system to generate lists of tens of thousands of human targets for potential airstrikes in Gaza, according to a report published last week. The report comes from the nonprofit outlet +972 Magazine, which is run by Israeli and Palestinian journalists.

The report cites interviews with six unnamed sources in Israeli intelligence. The sources claim the system, known as Lavender, was used with other AI systems to target and assassinate suspected militants – many in their own homes – causing large numbers of civilian casualties.

According to another report in the Guardian, based on the same sources as the +972 report, one intelligence officer said the system “made it easier” to carry out large numbers of strikes, because “the machine did it coldly”.

As militaries around the world race to use AI, these reports show us what it may look like: machine-speed warfare with limited accuracy and little human oversight, with a high cost for civilians.

Military AI in Gaza is not new

The Israeli Defence Force denies many of the claims in these reports. In a statement to the Guardian, it said it “does not use an artificial intelligence system that identifies terrorist operatives”. It said Lavender is not an AI system but “simply a database whose purpose is to cross-reference intelligence sources”.

But in 2021, the Jerusalem Post reported an intelligence official saying Israel had just won its first “AI war” – an earlier conflict with Hamas – using a number of machine learning systems to sift through data and produce targets. In the same year a book called The Human–Machine Team, which outlined a vision of AI-powered warfare, was published under a pseudonym by an author recently revealed to be the head of a key Israeli clandestine intelligence unit.

Last year, another +972 report said Israel also uses an AI system called Habsora to identify potential militant buildings and facilities to bomb. According to the report, Habsora generates targets “almost automatically”, and one former intelligence officer described it as “a mass assassination factory”.

The recent +972 report also claims a third system, called Where’s Daddy?, monitors targets identified by Lavender and alerts the military when they return home, often to their family.

Death by algorithm

Several countries are turning to algorithms in search of a military edge. The US military’s Project Maven supplies AI targeting that has been used in the Middle East and Ukraine. China too is rushing to develop AI systems to analyse data, select targets, and aid in decision-making.

Proponents of military AI argue it will enable faster decision-making, greater accuracy and reduced casualties in warfare.

Yet last year, Middle East Eye reported an Israeli intelligence officer said having a human review every AI-generated target in Gaza was “not feasible at all”. Another source told +972 they personally “would invest 20 seconds for each target”, acting merely as a “rubber stamp” of approval.

The Israeli Defence Force response to the most recent report says “analysts must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law”.
