Rise of the killer robots: The future of war

Unthinkable: Autonomous weapons allow us to kill without a conscience. Atrocities are inevitable

The aftermath of the drone strike ordered by US president Donald Trump which killed Iranian general Qassem Suleimani. Photograph: Iraqi Prime Minister’s Press Office via The New York Times

The recent tit-for-tat missile strikes between the US and Iran show how war has changed in the 21st century. Technology has brought new capabilities for killing at a distance, and what we are seeing today with long-range, so-called “precision missiles” is a harbinger of the next generation of warheads.

Autonomous weaponry and “killer robots” sound like the stuff of science fiction, but various governments, including the US and Russia, are investing heavily in their development.

Turkey has teamed up with a defence contractor to deploy kamikaze drones with biometric facial recognition to the Syrian border this year, while the Israeli-developed Harpy “loitering munition” – which hangs about in the sky looking for an unrecognised radar signal to strike – has been sold to several countries including India and China.

For cloud computing expert Laura Nolan, this issue became personal in early 2018 when, while working for Google, she discovered the tech giant had secretly signed up to the US military’s artificial intelligence project, Maven.


"I was completely taken aback because I still thought of Google as a company was about organising information and trying to empower people by giving them the ability to find things out," the Dubliner recalls. Also, Google had said publicly it would not get involved in military work – unlike, for example, Microsoft and Amazon, "two tech companies that are up to their necks in this [military] stuff".

Laura Nolan: ‘We don’t normally give machines decision-making power over human beings.’

Nolan wasn’t alone at Google in feeling betrayed. More than 3,000 Google employees signed a protest letter, prompting the company not to renew its contract with the Pentagon. By then Nolan had resolved to take up employment elsewhere, and she became involved in the Campaign to Stop Killer Robots.

She is now lobbying, as a member of the International Committee for Robot Arms Control, for a United Nations treaty to ban autonomous weapons. These are not standard missiles – where humans are notionally in control – but weapons over which technology has decision-making power.

There are obvious ethical problems with their use. Putting robots on the front line lowers the threshold for going to war. It permits a kind of unaccountable killing, as no computer can be hauled before a court. But there is also a major technological problem.

Based on our current knowledge of artificial intelligence, says Nolan, creating an autonomous killing machine that targets effectively and without error “is an impossible development task”. This means atrocities are inevitable.

She explains further as this week’s Unthinkable guest.

Why object to autonomous weapons when we have already handed over independence in other areas to technology?

Laura Nolan: “I would argue with that. I don’t think it’s fair to say robots control aspects of society. You can use robots to carry out some extremely well-defined and innocuous functions. We don’t normally give machines decision-making power over human beings.”

What are the main ethical problems with killer robots?

“First off, there’s the core moral question: Is it okay to allow a robot to be in the situation of being able to make that kill decision? We don’t have a machine or algorithm that can have any sort of appreciation of what it is to be a person and the gravity of that decision to take a life. A lot of people see that as inherently not okay.

“One of the bigger problems is lowering the barrier for states going to war. We’re already seeing this to an extent with non-autonomous drone warfare. Autonomous warfare is a step beyond that.”

Is there a concern that such weapons will go out of control?

“My core expertise is software reliability. My day-to-day job is working with systems which have multiple parts working together.

“There is huge complexity in these systems. I think it’s almost certain that we will see accidents where civilians are killed, or militaries see their own soldiers killed, or we see damage to civilian installations – or terrible scenarios like nuclear plants being damaged.

“The analogy I like to use is this: think about autonomous self-driving cars. We’ve had a lot of very large, well-funded, well-staffed technology companies working on these for probably 10 years or more, and we don’t know at this point if we are ever going to solve that problem to the extent that we would allow them on our roads.

“For autonomous cars, the problem is ‘Go from A to B and don’t get involved in a collision’. This is reasonably straightforward ... Now look at the autonomous weapons problem.

“If you just want to send a missile to a building, you don’t need autonomous weapons. The reason you want autonomy is that you want these weapons operating out of human communications range, independently targeting things that are not static. The whole point of these things is that they can hunt people and hunt things that move around.

“Even when you’ve identified your combatants or legitimate targets, you have to think about proportionality. You have to think about what collateral damage could arise and decide whether that is acceptable given how much military advantage is going to be gained from a strike. No one has really been able to write down strict rules for that, because it requires quite subtle human judgment.

“You end up with this technology-driven moral slippage. The other thing is you don’t have a war scenario on demand to test these things.”

How is the campaign for a ban going?

“We think there should be a strong assertion of human control over weapons – a human being who is actively involved in targeting and choosing to strike.

“At the CCW (the UN’s Convention on Certain Conventional Weapons) there have been a few states standing in the way, in particular Russia, and there are a few other states that are not friendly towards a treaty.”

Has Ireland been supportive?

“Ireland has not come out and strongly supported a treaty; they have been calling for a political declaration first ... It would be great to have Ireland pushing harder, it really would.”

The Government’s stance on autonomous weapons

The Department of Foreign Affairs says “Ireland regards the area of lethal autonomous weapons systems as one of the most pressing issues on the international disarmament agenda”.

The development of such weapons raises “serious ethical, moral and legal questions”, it adds.

“In the short term, we see a political declaration as the outcome most likely to secure a broad agreement at international level. But longer term, we also see value in developing an internationally agreed legal instrument designed to ensure human control over autonomous weapon systems.”