Among the technological faithful of Silicon Valley, there is an ongoing debate about the risks inherent in the development of artificial intelligence. On one side are the so-called “doomers”, who believe that AI presents an urgent existential threat to humanity; without rigorous ethical, technological, and legal strictures in place, they argue, we face a non-trivial possibility of machine intelligence wiping humanity from the face of the earth. This Terminator-esque scenario is palpably absurd, but that doesn’t prevent people who identify as rationalists from giving it a great deal of serious thought.
There are multidisciplinary research units at Oxford and Cambridge devoted to thinking through these prospects. Last year a Bay Area non-profit called the Center for AI Safety released a statement on AI risk, signed by the likes of Bill Gates and OpenAI chief executive Sam Altman, which announced that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
On the other side of the debate are “AI accelerationists”, who believe that such fears are hysterical, and that the great benefits this technology will bring to society are worth whatever trivial risk might be inherent in developing it. Fortune favours the brave, they say, and a technological utopia of endless ease and abundance will be our reward.
I’m neither a technologist nor a futurist, but both of these prospects seem wilfully simplistic and implausible. What strikes me as strange is that each of them, and the debate between their respective proponents, gets so much attention at a time when artificial intelligence is already being put to nightmarish and destructive uses, about which Silicon Valley’s prophets of AI doom apparently have very little to say.
Earlier this month, the left-wing Israeli magazine +972 published a detailed report about the Israel Defence Forces’ use of AI systems to generate targets among the Gazan population. Drawing on data gathered from automated mass surveillance of the area’s 2.3 million residents, Israel’s “Lavender” AI uses its internal algorithms to assess the likelihood of each person being a member of Hamas’s military wing or of the militant group Palestinian Islamic Jihad. Subjects who rank highly enough on the system’s scale of 1 to 100 are deemed targets for assassination, a process which is itself automated. In this way, according to the report, the system identified some 37,000 potential targets.
As with all AI systems, the value to its owner lies largely in its minimisation of human labour. The report quotes IDF intelligence officers on the frictionless ease with which such software facilitates mass killing.
It is easier to have faith in a statistical mechanism, said one officer, than in a grieving soldier whose loved ones had been murdered in the massacres of October 7th: “The machine did it coldly. And that made it easier.” Another spoke of the essentially negligible role played by humans in the process of selecting and killing suspected enemy combatants: “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.” (The IDF denies many of the claims in the report. It has said it “does not use an artificial intelligence system that identifies terrorist operatives” and that Lavender is not an AI system but “simply a database whose purpose is to cross-reference intelligence sources”. It also said that “the IDF reviews targets before strikes and chooses the proper munition in accordance with operational and humanitarian considerations.”)
The most horrifying aspect of the report, though, is its detailing of the guidelines on collateral damage; these inhuman calculations were made not by an AI system, but by the men and women of the IDF. During the early weeks of Israel’s slaughter in Gaza, for every low-ranking militant, it was permissible to kill 15 to 20 civilians; in attacks on high-ranking Hamas targets, the number was more than 100. It should be noted here that, even at the lower end of this scale, if 20 civilians were to be killed for each of the 37,000 military targets identified by the AI, the number of civilian deaths would come to 740,000: close to three-quarters of a million people, or about a third of Gaza’s 2.3 million residents. In any attempt to reckon with the question of whether Israel is in the process of carrying out a genocide, it is worth bearing this calculation in mind.
Attacks on low-ranking targets were made using “dumb bombs”: unguided munitions whose lack of precision inevitably meant the killing and maiming of civilians, most often women and children. “You don’t want to waste expensive bombs on unimportant people,” as one intelligence source put it. “It’s very expensive for the country and there’s a shortage [of those bombs].”
The manner in which the IDF’s AI systems calculate the value of Palestinian lives is mirrored, in this way, by the language of Israeli officials. The machine does it coldly, as the intelligence officer quoted in +972’s report put it, but it acts in service of a machinery of state that is no less chillingly inhuman. The AI doomers of the tech world, whose worldview is formed of equal parts narrow rationalism and magical thinking, seem interested only in abstractions. They have nothing to say about the actually existing AI-assisted apocalypse being unleashed by Israel on the people of Gaza; they are incapable of thinking about AI as anything other than a kind of divine force loosed upon the world by Promethean computer scientists.
But AI is a tool, like any technology, wielded by the powerful to serve their interests. If it obliterates entire sections of the employment economy, that is because the powerful are using it to reduce their labour costs. And if it facilitates the automation of death, it is because the powerful are using it to advance a project of colonial subjugation and extermination. It represents not a radical break with human history, in other words, but a radical intensification of historical business as usual: the rich getting richer, the poor getting poorer, and the powerless getting crushed by a machinery of power that is increasingly sophisticated, and enduringly barbaric.