My class of engineering students in the mid-1970s were among the first undergraduates at Trinity College Dublin to receive formal training in computer programming. As students of software, we were frequently told that computers do not make mistakes. Any errors in the results from our programs could only be due to our mis-instructing the computer on exactly what it was to do.
Some of my professors and lecturers had more than a passing interest in 'artificial intelligence', a research area then receiving heavy investment worldwide. The leading advocates of the field, such as Herbert Simon (of Carnegie Mellon University) and Marvin Minsky (of Massachusetts Institute of Technology), were predicting that machines "will be capable, within 20 years, of doing any work a man can do".
The approach combined two elements in software: rules (“if this occurs, then do the following...”) and logical deduction (“if you know this to be true, then you can assume the following is also true...”). There were well-funded projects to capture the rules and logic of many different domains, not least medical diagnosis, equipment maintenance and manufacturing operations.
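To give a flavour of that rule-based style, the sketch below applies a handful of invented diagnostic rules by simple forward chaining; the facts and rules are purely illustrative and are not drawn from any real expert system.

```python
# A minimal sketch of the rule-based, expert-system style described above.
# The facts and rules here are hypothetical illustrations only.

facts = {"fever", "cough"}

# Each rule: if all of its conditions hold, add a new conclusion.
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

# Simple forward chaining: keep applying rules until nothing new can be concluded.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'fever', 'cough', 'possible_flu', 'recommend_rest'}
```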
When I finished my postgraduate degree in 1983, my first job was with the European Commission in Brussels. The European Community had established a major international research programme, Esprit, as a direct response to the Japanese Fifth Generation initiative. There was a deep strategic concern across the computer industry that the Japanese would come to dominate the global economy through machines whose hardware and electronics had been designed specifically to run expert systems, based on rule-based software and logical deduction – as opposed to generic hardware designed for general-purpose use.
However, by the end of that decade, with the collapse of attempts to build such tailored hardware, research funding worldwide for artificial intelligence rapidly diminished.
Global research
In the last five years or so, machine learning has emerged as a substantial area of global research and considerable investment. Noteworthy milestones include 1997, when IBM's Deep Blue became the first chess-playing program to defeat a reigning world champion (Garry Kasparov); 2011, when IBM's Watson beat the champions of the quiz game Jeopardy!; the emergence of Google Translate, which does a more than reasonable job of translating between the world's natural languages; and 2017, when DeepMind's AlphaGo defeated the world's top-ranked Go player (Ke Jie).
Today, evangelists and sceptics debate whether, within 20 years, machine-learning algorithms will have replaced many jobs worldwide and will be capable of doing any work any human can do.
The curious aspect of machine learning is that computers are not given a predefined list of instructions to follow explicitly in a software program, such as those my colleagues and I were taught to write as engineering students. Instead, machine-learning programs are shown examples – frequently, very many – of what needs to be achieved and then, based on this training, they are able to predict what to do in new situations.
For example, Google Translate was initially trained on transcripts from the European Parliament and the United Nations. The approach is intriguing because the machine itself discovers relationships and complex patterns. The patterns are often so complex that, even once the machine has discovered them and made them manifest, humans still have extreme difficulty understanding how and why exactly they work.
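As a rough illustration of learning from examples rather than from hand-written rules, the sketch below trains a small classifier on an invented set of labelled examples (using the scikit-learn library) and then asks it to predict an unseen case.

```python
# A minimal sketch of learning from examples rather than from explicit rules.
# The tiny labelled dataset below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Training examples: (hours of study, hours of sleep) -> pass (1) or fail (0).
X_train = [[8, 7], [7, 8], [2, 4], [1, 6], [9, 6], [3, 3]]
y_train = [1, 1, 0, 0, 1, 0]

# The program is never told *why* these labels hold; it infers a pattern itself.
model = DecisionTreeClassifier().fit(X_train, y_train)

# Prediction for a new, unseen situation.
print(model.predict([[6, 7]]))  # e.g. [1]
```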
Vision system
If humans cannot always understand why machine-learning software works, it is even harder to understand why it sometimes does not. In 2014, researchers at New York University and Google reported that a machine-learning vision system could be completely fooled by relatively small and almost imperceptible changes to an image, for example classifying a dog as a ship.
Last year, it was shown how suitably designed stickers placed over common traffic signs could disastrously confuse a road-sign recognition algorithm, such as might be used in a self-driving car. Also last year, an image-processing system asserted with absolute and bizarre confidence that it had identified a 3D view of a rifle, when in fact it was being shown a 3D-printed model of a turtle, albeit with complex colour patterns on its shell.
Adversarial algorithms have now been discovered which can deliberately cause catastrophic failures of machine-learning systems. Worse still, these attacks can be almost undetectable to the human eye, and so a human operator may not realise that a system is being compromised. Face-recognition and other security systems, diagnosis systems, and robotic and self-driving systems can now all be deliberately misled by relatively simple adversarial attacks.
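The flavour of such an attack can be shown with a toy example. The sketch below, loosely in the spirit of the "fast gradient sign" technique, uses an invented linear model: a small, bounded nudge to every input feature is enough to flip the model's decision.

```python
# A toy, self-contained illustration of the adversarial-attack idea.
# The linear model and its weights are invented purely for illustration.
import numpy as np

w = np.array([4.0, -6.0, 5.0])   # hypothetical model weights
b = -0.2

def predict(x):
    """Probability that the input belongs to class '1'."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.3, 0.6, 0.4])    # an input the model assigns to class '0'
print(predict(x))                # ~0.35 -> class '0'

# For a linear model, the gradient of the score with respect to the input is just w.
# Nudging every feature slightly in the direction of that gradient's sign is enough
# to push the model over its decision boundary.
epsilon = 0.1
x_adv = x + epsilon * np.sign(w)
print(predict(x_adv))            # ~0.71 -> now class '1'
print(np.max(np.abs(x_adv - x))) # yet no feature changed by more than 0.1
```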
If machines can be deliberately forced to make mistakes, and if we do not understand how they really work and thus how they might fail, then machine-learning algorithms face a major challenge in situations where safety is a fundamental concern.
Regulators are likely to insist that sufficient care is taken in engineering and auditing so that human life and valuable assets are not put at risk. Making machine-learning algorithms robust to concerted adversarial attacks has now, of necessity, become a priority research area across the globe. If machine-learning applications cannot be safely engineered, they will fail commercially.