Sunak’s AI adviser warns tech could help produce killer weapons within two years

‘I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t,’ says Matt Clifford

Matt Clifford is advising British prime minister Rishi Sunak (above) on the development of the UK government’s Foundation Model Taskforce, which is looking into AI language models such as ChatGPT and Google Bard, and is also chairman of the Advanced Research and Invention Agency (Aria). Photograph: Jordan Pettitt/PA

Artificial intelligence (AI) could be behind advances that “kill many humans” in only two years’ time, according to British prime minister Rishi Sunak’s adviser on the technology.

Matt Clifford said that unless AI producers are regulated on a global scale, there could be “very powerful” systems that humans would struggle to control.

Even the short-term risks were “pretty scary”, he told TalkTV, with AI having the potential to create cyber and biological weapons that could inflict many deaths.

The comments come after a letter backed by dozens of experts, including AI pioneers, was published last week warning that the risks of the technology should be treated with the same urgency as pandemics or nuclear war.


Senior bosses at companies such as Google DeepMind and Anthropic signed the letter, along with the so-called “godfather of AI”, Geoffrey Hinton, who resigned from his job at Google earlier this month warning that, in the wrong hands, AI could be used to harm people and spell the end of humanity.

Mr Clifford is advising the British prime minister on the development of the UK government’s Foundation Model Taskforce, which is looking into AI language models such as ChatGPT and Google Bard, and is also chairman of the Advanced Research and Invention Agency (Aria).

He told TalkTV: “I think there are lots of different types of risks with AI and often in the industry we talk about near-term and long-term risks, and the near-term risks are actually pretty scary.

“You can use AI today to create new recipes for bio weapons or to launch large-scale cyber attacks. These are bad things.

“The kind of existential risk that I think the letter writers were talking about is… about what happens once we effectively create a new species, an intelligence that is greater than humans.”

While conceding that a two-year timescale for computers to surpass human intelligence was at the “bullish end of the spectrum”, Mr Clifford said AI systems were becoming “more and more capable at an ever increasing rate”.

Asked on the First Edition programme on Monday what percentage chance he would give that humanity could be wiped out by AI, Mr Clifford said: “I think it is not zero.”

He continued: “If we go back to things like the bio weapons or cyber (attacks), you can have really very dangerous threats to humans that could kill many humans – not all humans – simply from where we would expect models to be in two years’ time.

“I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t.”

The technology expert said AI production needed to be regulated on a global scale and not only by national governments.

AI apps have gone viral online, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to generate university-grade essays.

But AI can also perform life-saving tasks, such as analysing medical images including X-rays, scans and ultrasounds, helping doctors to identify and diagnose diseases such as cancer and heart conditions more accurately and quickly.

Mr Clifford said that AI, if harnessed in the right way, could be a force for good.

“You can imagine AI curing diseases, making the economy more productive, helping us get to a carbon neutral economy,” he said.

The Labour Party is pushing for ministers to bar technology developers from working on advanced AI tools unless they have been granted a licence.

Shadow digital secretary Lucy Powell, who is due to speak at TechUK’s conference on Tuesday, told the Guardian that AI should be licensed in a similar way to medicines or nuclear power.

“That is the kind of model we should be thinking about, where you have to have a licence in order to build these models,” she said. - PA