Unthinkable: Can a machine have a mind of its own?

A computer replicating the voice of Scarlett Johansson might be able to convince users that it is human – but does that in fact make it human?

Joaquin Phoenix in Her, which tells the story of a man's relationship with his computer operating system

The movie Her, which stars Joaquin Phoenix and is being released here next month, tells the story of a man who develops a relationship with his computer operating system, voiced by Scarlett Johansson. Whatever about the credibility of this love affair, it raises a question that has occupied the minds of some great thinkers in recent decades: can a machine have a mind of its own?

Different answers have come from computer science, psychology and philosophy, which we explore here with Phil Maguire, co-director of the BSc degree in computational thinking at NUI Maynooth. Rejecting some of the more fanciful predictions of a computer-run world, he provides today's idea: Machines are incapable of replicating human consciousness.


Can a machine have feelings?
Phil Maguire: "The key question here is what we mean by having feelings. What is the criterion that we use for identifying systems that can have such experiences? Alan Turing, one of the founding fathers of computer science, suggested that if we cannot distinguish the output of a computer from that of a human, then we should attribute the same qualities to the computer as we do to people, such as thinking and having feelings.

"Others, such as John Searle, have argued that this is not enough. He used the example of a rule book, which somebody could follow to maintain a conversation in Chinese. Even though the person seems to be communicating in Chinese, they are simply following a set of rules and have no experience of the content at all."


At what point in technological progress can we say machines are no different to humans?
"The thing about a machine is that you know it has been constructed by humans, and thus it follows explicit rules. It is conceivable that, in the future, the output of a software programme might be so sophisticated that it seems as if you are interacting with another person. Your computer might display a mind of its own and successfully pass Turing's test. But there are still other tests it does not pass.

"For example, you can rip off the lid and see that the system follows a set of rules, as dictated by an arrangement of electronic circuits. As soon as you know the rules the system is following, you can break it down into components, thus destroying the illusion that it is 'feeling' anything. Intuitively, this isn't something we can do with human consciousness."


What might be a suitable test for intelligence in computers?
"The thing about Turing's test is that a programme might pass the test simply by exploiting weaknesses in human psychology. A more recent and stringent test is the Hutter Prize, which is offered to programmes that manage to describe as concisely as possible the first 100 million characters of Wikipedia.

"People are very good at 'compressing' this kind of data because they are aware of the patterns that link words and sentences together. For a machine to achieve the same level of data compression, it would need the same depth of understanding. It seems there is no way to cheat."

Are humans now caught in a trap of pleading exceptionalism, and, if so, will that plea become more tenuous as machines catch up with or overtake human capacities?
"My personal view is that humans are indeed exceptional. A system must pass every conceivable test to be regarded as conscious, not just Turing's test. There must be no way in which the system can be broken down into a set of discrete rules.

"However, any machine built by human hands will always be limited by the fact that it adheres to a precisely defined mechanism. Computer programs might become very sophisticated and seem human-like, but we will always know that they are just machines following a set of independent, unfeeling rules."


Through technology, are we making ourselves redundant in the long run?
"Some researchers believe in something called the technological singularity. The idea is that, at some point in the near future, computers will gain the ability to improve themselves, leading to a sudden exponential take-off in machine intelligence. I personally think this idea involves a misunderstanding. Intelligence is something that is hard to achieve. By definition, there should be no easy way to create it.

"Machines can be used for carrying out mundane tasks that have already been comprehensively mapped by humans, but what if they break down? We will always need people to remember how to build them."

Are there other threats from technological progress?
"All of the knowledge that our civilisation has accumulated must ultimately be stored in human memory. My concern is that, with a crash in human population looming in the near future, innovation will grind to a halt. Birth statistics indicate that we have already reached peak child, meaning from now on there will be fewer and fewer babies born, possibly forever.

"With fewer people, there simply won't be enough human thinking time available to sustain progress or even to maintain the level of technology we currently possess. In that case, the question would be whether we can revert gracefully to an earlier state of technology or whether the system will have become so fragile that it will simply collapse."


ASK A SAGE

Question: "My boyfriend is constantly telling me he loves me, I mean, all the time; why is it getting on my nerves so?"


Jacques Lacan replies: "To love is to give something you haven't got to someone who doesn't exist."

Send your intellectual dilemmas to:
philosophy@irishtimes.com
Twitter: @JoeHumphreys42