Special Report
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

Will artificial intelligence take over the world?

Despite the hype surrounding AI, machines can still only perform narrow tasks that human intelligence has taught them, say experts

“We haven’t created an artificial intelligence yet. Research in this area has allowed us to do useful things, but it isn’t sentient and it doesn’t understand what it is doing – it is just blindly following an algorithm that humans have designed.” Photograph: iStock

Much like human intelligence has evolved over time, so too will artificial intelligence. The difference is that this evolution is being guided by humans and getting to the next level with this technology raises many important – and uncomfortable – philosophical and ethical questions.

As we learn more about the possibilities of AI, broad classifications have been given to the different stages: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial superintelligence (ASI). For the uninitiated, what does each stage represent, and how close are we to the final stage of evolution?

Prof Michael O’Neill is director of the UCD Natural Computing Research & Applications Group. He explains that it can be beneficial to define where we are when it comes to progress in artificial intelligence.

“These are useful terms in the sense that where AI technology is now is very much in that narrow space, and by narrow, for me that means the technology is focused on a specific problem,” he says.


“While a technology might exhibit behaviour which is seemingly intelligent or might have a good performance on a specific task or a specific problem, outside of that problem it won’t function and can’t work.” The narrow domain includes machines or software focused on just one task – such as driving a car, acting as a voice assistant, or playing chess.

Yet O’Neill believes that the term AI is often misused and debate rages as to what qualifies as true artificial intelligence. Although machines may appear to have mastered certain tasks, it’s only because we have taught them how to do so, he says. “Getting technical, we haven’t created an artificial intelligence yet. Research in this area has allowed us to do useful things, but it isn’t sentient and it doesn’t understand what it is doing – it is just blindly following an algorithm that humans have designed.”

Whilst there has been major progress in developing and applying AI in recent years, the reality is that AI, measured against the definition of intelligence, is still very much in its infancy, agrees Prof Damien Coyle, director of the Intelligent Systems Research Centre at Ulster University.

“Intelligence is a measure of an agent’s ability to achieve goals in a wide range of environments and tasks and includes reasoning, memory, understanding, learning and planning,” says Coyle.

Examples of AI include IBM’s Watson supercomputer, which won the American TV quiz show Jeopardy!, and world-beating chess computers such as Deep Blue and AlphaZero.

‘Not exactly game-changers’

“Very impressive, yes, but not exactly game-changers yet as they have learned from fairly narrow domains of knowledge or require millions of trial-and-error actions in order to act optimally,” says Coyle.

“The main reason we use games to train and test AI is that they provide a set of diverse tasks and are also good simulators which are well-optimised, so we can generate scenarios that have clearly defined goals, an action space and rewards, and we can measure the progress and evaluate performance. It’s easy to take a game and train on it, rather than, let’s say, using a robotic task in a complex environment or a real-life problem with many variables and unknown states,” he explains.

“We now, however, have powerful enough high-performance computing systems that enable AI to learn from trial and error,” he adds.
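For readers curious about what this looks like in practice, the loop Coyle describes – defined goals, an action space, rewards and a measurable score – can be sketched in a few lines of Python. The Gymnasium library and its CartPole balancing game used below are illustrative assumptions, not tools named by the researchers.

```python
# A minimal sketch of the game-as-testbed loop Coyle describes: an
# environment with a defined action space, per-step rewards and a
# measurable score. The Gymnasium library and the CartPole balancing
# task are illustrative assumptions; the article names no specific tools.
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)

total_reward = 0.0
for _ in range(500):
    # A trained agent would pick actions from experience; random
    # actions stand in for the "trial and error" starting point.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:  # the episode's clearly defined end
        break

print(f"episode score: {total_reward}")  # the performance measure
env.close()
```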

Despite this, current technology remains very much within the realm of ANI, with AGI some way off.

Steve Wozniak, the co-founder of Apple, devised “The Coffee Test”, which outlined what he believed constitutes true AGI: “A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.”

Robots won’t be making you coffee any time soon. According to Coyle, AGI is defined as AI that can carry out any cognitive function that a human is capable of and not just a set of specific narrow tasks – “so we are talking about nearly human-level intelligence. While some AI can do things that humans can do, this is typically programmed and learned. The difference is that AGI will reflect on its goal and decide whether to adjust something, so it has a level of volition, as well as perhaps self-awareness and consciousness, and we are not at that point.”

This was, perhaps, most evident when attempts to teach a neural network humour using a database of 43,000 ‘what if’ jokes failed spectacularly – not even a wry smile was raised at the resulting “jokes”.

This means humans are still very much in control: “Everything that AI does, we as humans programme it or feed it lots of data and it learns through repetition and so on – it can’t just evolve and learn other areas it hasn’t been trained on. You still need humans there to use their superior forms of intelligence.”
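In code terms, “feeding it lots of data” and “learning through repetition” amounts to a loop like the one sketched below, which uses the PyTorch library to fit an invented toy task (learning to double a number). The point of the sketch is the narrowness: the model only ever improves on the mapping present in the data it is shown.

```python
# A hedged sketch of supervised "learning through repetition": the same
# human-chosen data, shown over and over. The doubling task is an
# invented example, built with the PyTorch library.
import torch
import torch.nn as nn

xs = torch.linspace(-1, 1, 100).unsqueeze(1)  # the only "world" the model sees
ys = 2.0 * xs                                 # the human-designed target

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):  # repetition: the same data, many times over
    loss = nn.functional.mse_loss(model(xs), ys)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The model now doubles numbers well, but knows nothing beyond this task.
print(model(torch.tensor([[0.5]])).item())  # approximately 1.0
```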

And while there are some applications of AI with a certain degree of autonomy, this is still programmed in – by humans – and exercised within a very controlled scenario. Coyle adds that a good example is the driverless car.

“Things like driving a car are quite complex and, while significant work has been done on autonomous and driverless vehicles, they still aren’t out there in everyday life because there are a lot of unanticipated scenarios – scenarios that humans can deal with quite easily but where an AI in a car would fail dramatically, for example, a sticker placed on a stop sign being interpreted incorrectly as a speed limit sign,” he says.
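The mislabelled stop sign Coyle mentions is an instance of what researchers call an adversarial example. The sketch below illustrates the general idea with the fast gradient sign method – a standard textbook technique, not the specific attack he describes – applied to a toy classifier and a random stand-in image.

```python
# A hedged sketch of an adversarial example via the fast gradient sign
# method (FGSM). The three-class model and random "image" are toy
# stand-ins, not an actual traffic-sign classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier over three made-up classes, e.g. stop / speed limit / yield.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "photo"
label = torch.tensor([0])                             # class 0 = "stop"

# Take the gradient of the loss with respect to the *input* pixels,
# not the model weights.
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss -
# the digital analogue of a carefully placed sticker.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# With a trained model and a tuned epsilon the prediction reliably
# flips; for this random toy model it may or may not.
print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```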

AGI requires higher-level thinking, as well as awareness and adaptability from the machine. This could open a can of worms. “If there was AGI – and there isn’t – it could be set any problem and it could attempt to solve that problem. How well it might solve any problem is open to debate and there are theoretical and philosophical conversations all around that,” Coyle admits.

Ethical dilemmas

If AGI conjures up ethical dilemmas, artificial superintelligence is a whole other ball game.

“Artificial superintelligence is a potential form of AI in which AGI builds more AGI, if you want to put it in simple terms,” Coyle explains. “Assuming you have AGI that can perform at the level of humans and has a level of self-awareness and consciousness and has volition, then it may be able to create new artificial intelligence that builds on that level.” If this happens repeatedly, the upshot is that superintelligence may emerge, which could, in theory, go beyond human intelligence – but this is only possible in the realm of science fiction, he says.

Indeed, it is O’Neill’s belief that ASI is not simply an inevitable outcome. “For me, ASI is scaremongering,” he says. “It is an overhype of the technology. I am not aware of any serious researchers who are trying to achieve a superintelligence. We are so far removed from achieving an actual artificial intelligence that superintelligence is not something that is on our radar – it is just hype.”

Yet he believes the philosophical and ethical debates surrounding artificial intelligence must consider these potential scenarios, however hypothetical they may be.

“With any new technology there are implications, both positive and negative, and AI is no different. It is very healthy for society and researchers to be considering the ethical and social responsibility that they have when undertaking research in this space.

“We have to ask the question, what if we actually create an artificial intelligence – what does it mean, and can we trust it to behave the way we want it to?”

Danielle Barron

Danielle Barron is a contributor to The Irish Times