The AI apocalypse: will the human race soon be terminated?

Robots won’t destroy their makers, but we must ensure everybody benefits from technology

A scene from Terminator Salvation: “Any apocalyptic vision of AI can be disregarded,” says Prof Luciano Floridi. The worst-case scenario is when the development of AI benefits the minority.

There is a disturbing moment in Stanley Kubrick's 2001: A Space Odyssey when the sentient ship computer Hal suffers a meltdown. Outside, in the depths of space, astronaut Frank Poole is cut loose by the machine and sent floating helplessly into the void.

It is a nightmare scenario in the world of artificial intelligence (AI): the kind of thing many believe might happen when superintelligent machines develop to the point where they become an existential threat to the human race. It would be the supreme paradox of our existence – that we out-invent ourselves.

In the 1960s, British mathematician Irving John Good wrote about how human beings' development of technology would spawn an "intelligence explosion", a cycle in which machines design ever-superior versions of themselves. Mankind becomes redundant.

“Thus the first ultra-intelligent machine is the last invention that man need ever make,” Good wrote in a chillingly simple surmise. “It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.”


Worst-case scenario

Good’s views appear in the opening of Luciano Floridi’s essay The Ethics of Artificial Intelligence, a treatise on how seriously such worst-case-scenario events should be taken.

The good news is that in Floridi’s version of the future, the world will not disintegrate into a man-versus-machine holocaust – there will be no Arnold Schwarzenegger-style Terminators, no Keanu Reeves-like attempts to destroy the evil, machine-designed Matrix.

"They have zero intelligence. We have no idea how to implement intelligence in any machine – not the kind of intelligence you and I have, or even a dog has. Anything else is speculation," says Floridi, ahead of the publication of his essay in Megatech: Technology in 2050, a collection of papers edited by Daniel Franklin.

A native of Rome, Floridi is professor of philosophy and ethics of information at the Oxford Internet Institute, and a research associate and fellow in information policy at Oxford’s Department of Computer Science.

‘Summoning the demon’

In his essay, Floridi discusses the hypothetical doomsday scenario of AI and those who promote it. Stephen Hawking, he points out, said the development of full artificial intelligence could “spell the end of the human race”. Tesla chief executive Elon Musk suggested “we are summoning the demon”.

These views are not mainstream, but they do exist, and they tend to centre on Good’s concept of the “intelligence explosion”, or “singularity”. This, explains Floridi, is the moment when “the game changes dramatically”: when human beings lose their grip on the development of machines, which then take matters into their own (robotic) hands, with potentially grim consequences.

“It runs away from humanity,” he says of the theory. “We become a sort of enslaved species or perhaps pets.”

Jokingly, he continues: “Our best chance is to make sure that we don’t treat them badly now so they will remember that we were kind.”

But the question of when man’s creative impulses might precipitate an age in which computers rule the world has no answer. Floridi compares it to asking when an alien might arrive on Earth: such a scenario cannot be discounted, but neither can it be predicted with any degree of certainty.

More trivial

Today’s reality of AI is, as Floridi explains, more trivial. And yet, while there is a slim chance of a robot deciding to jettison its human cargo into space, there are more realistic, and more pressing, threats that must be considered.

In this context, any earnest focus on achieving the singularity is “a joke and it’s a bad joke now. It’s not funny anymore,” says Floridi.

"What are you worried more about: unemployment, violence in the street, radicalisation and terrorism? We got a long list: environmental disaster, global warming . . . or AI coming and Terminator dominating earth? And I would like to see someone say, oh well, that [the Terminator option] is where the money should go; that's the problem."

Floridi comes into his own when he shifts into ethical gear. He is a firm proponent of the role that technology plays – “it’s one of the great things that makes us human” – but he is trying to set the parameters of the debate.

He talks about human damage to coral reefs, the undermining of the foundations of society, fake news and neo-fascism. These are the priorities, he says, and AI could be a “magic, powerful force” to counteract both inequality and environmental concerns.

“We could help with distribution of wealth and inequality by having more going around than at any time before. And we could have serious solutions for the environment through intelligent applications that minimise cost.”

Wealth creation

The worst-case scenario is not the Matrix – it is when the development of AI benefits only a minority, leaving the majority “not touched by this new wave of wealth creation. That would be a shame.”

In his essay, Floridi says: “Any apocalyptic vision of AI can be disregarded. The serious risk is not the appearance of some ultra-intelligence, but that we may misuse our digital technologies, to the detriment of a large percentage of humanity and the whole planet.”

About a decade ago, human civilisation passed the point at which more data flowed between machines than between people. Today, this new data environment is central to any consideration of where the future is going and how humans should maintain control.

Floridi likens today’s reality, in which complex machines carry out ever more complex functions, to an ocean. In this world, data are the fish and humans are the scuba divers – that is to say, “we will always be immigrants” in an environment where technology continues to drive change and shape the world.

In his essay, he notes this phenomenon of “enveloping the environment into an AI-friendly infosphere”, which has been going on for decades.

Toaster

Machines are improving at a fierce rate and taking on a more central part in our lives but, Floridi writes, they are not “the children of some sci-fi ultra-intelligence, but ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster”.

In the immediate future, there is a need to control the development of this technology so that people can benefit and be protected.

In the first case, he uses the example of a car manufacturer that, flush with driver and automobile data, teams up with a local authority to improve road safety – not because it will profit, but because society should benefit from these great leaps in technological know-how: a kind of data altruism.

We should also be wary of being too ready to take humans out of tasks that technology could do by itself. Getting back to Irving John Good's point about paying attention to science fiction, this warning is well illustrated by the 1983 film WarGames, in which the US military removes the human response from its nuclear missile deterrent, with edge-of-the-seat consequences. Technology's limits ought not to be underestimated.

Back in the real world, Floridi gives the example of predictive policing technology used to position officers in certain areas to reduce crime. It may work, but it is “also potentially crippling” in terms of human behaviour, bias and unfair treatment of certain parts of a city.

Political element

There is also a political element to the debate. As machines replace human workers, there are fears about what, if anything, might replace those jobs. US president Donald Trump recently promised to reopen factories that have closed over the years, but his critics argue that even if he did, many of those jobs would be filled by technology.

“I think leaders like Trump, they don’t understand the world as it is moving forward. They do understand the human perception of this change which is people are scared, upset; they are worried,” he says, so they “address the concern with old solutions”.

In the future AI will offer many of its own solutions, and no doubt problems, but they are, for now, unlikely to include homicidal robots or machines that consider and decide the merits of making their human masters obsolete.

Society must have a plan and avoid a trial-and-error approach to technology, something Floridi says “comes with a cost and some of the costs are irreversible. Once you drop two atomic bombs it’s a bit too late to say I’m sorry, that was a mistake.”

Parting warning

In a parting warning at the end of his essay, he appeals for a course that would see the development of AI that is friendly to both humans and the environment – technology that should “make us more human”, and a future in which we avoid the temptation to misuse our own ingenuity.

In the meantime, while technology continues to consume us in our scuba-diving existence, we can dispense with the horrors of science fiction or what Floridi calls the “potential monsters lurking” in the corner of a darkened room. At least for now.

Science fiction is “where we free our imagination” but, he continues, we should never confuse entertainment with reality.

"It would be like . . . having read a lot of Harry Potter, being worried about He Who Must Not Be Named. I can enjoy Harry Potter without being worried about the dark force," he says.

“Darth Vader should be left to Hollywood.”

Mark Hilliard

Mark Hilliard is a reporter with The Irish Times