The destruction of mankind by swarms of intelligent robots is a film trope long held dear by Hollywood producers and directors.
Whether it’s the Terminator, Ultron or the computer in WarGames, we’ve been worrying about ways in which such technology could destroy us ever since the introduction of the first computer.
Thankfully, visions of an invading army of kill-bots are well wide of the mark, at least within the bounds of foreseeable technology. But the fact is that artificial intelligence (AI) is already being used for nefarious purposes, and even the world’s biggest nations are struggling to keep ahead of the threat.
That is certainly the conclusion reached by the National Security Commission on Artificial Intelligence, which recently reported to both the US Congress and President Joe Biden that: “America is not prepared to defend or compete in the AI era. This is the tough reality we must face. And it is this reality that demands comprehensive, whole-of-nation action.”
One of the report’s authors was Dr Steve Chien. Dr Chien’s official title is senior research scientist and technical group supervisor of the artificial intelligence group, in the mission planning and execution section, at the Jet Propulsion Laboratory. The laboratory, or JPL as it’s better known, is technically part of the California Institute of Technology (Caltech), but it’s effectively an arm of Nasa, and is home to mission control for much of the agency’s unmanned space exploration.
The Voyager probes, which have crossed the boundary between our solar system and the vastness of interstellar space? Those are run by JPL. Ditto the Mars rovers, Perseverance and Curiosity.
Faster decisions
Dr Chien’s day job is essentially teaching robotic space probes how to think more like a human, so that when they are billions of miles from Earth, they can think for themselves, at least a little, and make better and faster decisions about where and how to point their cameras and various sensors, to help us make better sense of the solar system, galaxy and universe.
“I had the honour of supporting a congressional report on the role of AI in national security,” Dr Chien told The Irish Times. “AI is essential to the cyber space. In cyber, one can create hundreds or thousands of ‘software agents’ far easier than building physical tanks, airplanes, ships or missiles.
“By software agents, I mean software programmes, the logical extensions of the bots that allow hackers to take over computers and launch attacks from a myriad of hosts across the internet. But they are getting smarter and smarter, and more complex every day. It is a global software arms race, with many advantages to the attacker. The defender needs to win all the time. The attacker only needs to win some of the time to succeed. It is the classic asymmetric warfare.”
The report to which Dr Chien contributed warns that: “Our leaders confront the classic dilemma of statecraft identified by Henry Kissinger: ‘When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared.’ The scope for action remains, but America’s room for manoeuvre is shrinking.”
The report describes state-level adversaries already employing “AI-enabled disinformation attacks to sow division in democracies and jar our sense of reality”. Wouldn’t it have been nice to have had that particular warning, pre-2016?
There are even concerns that states, criminals or terrorists could buy commercially available drones off the shelf in an electronics shop, reprogramme them with fresh, malicious software and then add firepower. This doesn’t need to be the high-tech lasers of Hollywood lore – a simple lump of Semtex and a timer would be sufficient to sow chaos and cause damage and death.
Weaponised nerds
So, how can we defeat such tech-agile adversaries? Essentially, countries that want to defend against war-making computer code will need to change the way, and the people, they recruit for the military and security services. Out will go the traditional beefcake super-soldiers, and in will come weaponised nerds.
“We need to build entirely new talent pipelines from scratch. We should establish a new digital service academy and civilian National Reserve to grow tech talent with the same seriousness of purpose that we grow military officers,” said the report.
“So the differentiating factor is speed and smarts derived from AI, not numbers. So whoever has the smartest agents will win the cyber war. Whoever wins the AI competition will dominate the cyberspace. And in today’s world, everything is controlled by computers – power, water, communications, transportation – hence the importance of AI to our nations,” Dr Chien said, while also noting that such issues touch on his own heartland of space exploration.
“Cyber security is incredibly important to space missions, as we use computers to plan out and operate these extremely complex space missions. Any cyber threat could jeopardise the entire mission, so cyber is of paramount importance.”
Dr Chien’s day job is effectively about harnessing the terrifying power of ballistic missile technology to fire astronauts and robot probes into space, rather than hurling warheads at one another. A massive programme of investment in AI defence policy has the potential to bring about similar spin-off improvements that are more directly useful to the general public, including improving quality of life for an ageing population, learning and teaching, energy management and medical practice.
The key to any defence against malicious AI is the same as it’s ever been: innovation. When arrows were invented, armour followed, and the race between attack and protection has been running ever since. As the report notes: “If the investments needed are implemented, they will set the conditions to harness AI to tackle some of the biggest challenges in science, society, and national security.”