In a remarkable story published over the weekend, The Washington Post [https://www.washingtonpost.com/technology/2024/12/29/ai-israel-war-gaza-idf/] examined the role of artificial intelligence in developing targets in Gaza for the Israel Defense Forces. The IDF delights in their super-sophisticated kill programs, treating them as a fun set of new toys, with names like "The Gospel," "The Alchemist," and "Depth of Wisdom." The Post quotes skeptics in the IDF expressing misgivings about the "religious" attitude toward AI within the organization. As the war has continued, the software has evidently grown more and more independent of human input, providing much of the basis for the continuing slaughter, which has taken about 45,000 lives thus far and flattened much or most of the Gaza Strip.
The leader of Israel's military AI program, a man whose faith in AI is described as "religious," is Yossi Sariel, the commander of Unit 8200, which develops targets for the IDF and has transitioned to an almost pure AI approach. Sariel developed his approach working with professors at the National Defense University in Washington, DC, where such Israeli military leaders as Benny Gantz and Herzl Halevi have also worked and studied. The US military is certainly working on the same technologies.
"In Sariel’s expansive vision," reports the Post, "AI would touch all aspects of defense, in both peacetime and war. By collecting digital trails, armies could build advanced 'target banks' with names, locations and behavior patterns of thousands of suspects. These technologies could replace 80 percent of intelligence analysts that specialize in foreign languages in just five years."
The technology quickly, or almost instantly, correlates an enormous amount of data; the programs can interpret "minuscule changes" in satellite imagery or cell phone pings to indicate a tunnel entrance or a rocket launcher or a militant. The program called “Lavender” is concerned specifically with humans, and assigns to each potential casualty a probability that he’s a member of a militant group. You get a grade, based on where you are, who you're communicating with, your name, age, gender. Then your family gets a missile through the window.
The IDF uses the Gospel and related programs to directly set levels of civilian casualties they’ll accept, which is an amazingly clear window onto how the slaughter has proceeded. "In 2014, the IDF’s acceptable civilian casualty ratio was one civilian for a high-level terrorist," the Post reports. "In the Gaza war, the number has grown to about 15 civilians for one low-level Hamas member and 'exponentially higher for mid- and high-level members,' according to the Israeli human rights organization Breaking the Silence, citing numerous testimonies from IDF soldiers. The New York Times reported the number as 20 earlier this week."
Killing 2,250 low-level fighters at 20 civilian casualties apiece yields that round 45,000, and renders the death of 20,000 children acceptable to the algorithm and hence real in the world. This is the most grotesque calculation ever performed on a computer. You can set the program to “genocide” and launch your bombs accordingly. As Steven Feldstein of the Carnegie Endowment puts it blandly in the Post, "the end result is a higher death count than was previously imagined in war." But what Israel is doing in Gaza now, at 20 to 1, isn’t exactly a war. It's unilateral destruction of a population.
These formulas encode an unfolding algorithmic genocide, in which the people making the decisions to kill can deflect responsibility onto the programs. That’s very likely in the long run to be a defense at trial for crimes against humanity: the program made me do it. "I was just following orders" is the classic defense to the charge of genocide, or child murder in a military context. "I was just doing what the algorithm told me to do" is likely to be the next one. Mistakes were made, but nobody really understands why or by whom. Sorry about the annihilation of your family.
And there will likely, eventually, be trials for crimes against humanity implicating AI. Already a year ago, when South Africa brought genocide charges against Israel in The Hague, it pointedly asked whether targeting decisions were made by software. If so, whom could we hope to hold responsible for tens of thousands of deaths? And how can we back away from the precipice we're inching toward, in which algorithms kill millions of people and no one seems to have done anything but nod along to the deliverances of their program, their "Gospel"?
Meanwhile, take a few moments to contemplate the potentially world-annihilating effects of the National Defense University.
—Follow Crispin Sartwell on X and Bluesky: @CrispinSartwell