A US air force colonel “misspoke” when he said at a Royal Aeronautical Society conference last month that a drone killed its operator in a simulated test because the pilot was attempting to override its mission, according to the society.
The confusion had started with the circulation of a blogpost from the society, in which it described a presentation by Col Tucker “Cinco” Hamilton, the chief of AI test and operations with the US air force and an experimental fighter test pilot, at the Future Combat Air and Space Capabilities Summit in London in May.
According to the blogpost, Col Hamilton had told the crowd that in a simulation to test a drone powered by artificial intelligence and trained and incentivised to kill its targets, an operator instructed the drone in some cases not to kill its targets and the drone had responded by killing the operator.
The comments sparked deep concern over the use of AI in weaponry and extensive conversations online. But the US air force on Thursday evening denied the test was conducted. The Royal Aeronautical Society responded in a statement on Friday that Col Hamilton had retracted his comments and had clarified that the “rogue AI drone simulation” was a hypothetical “thought experiment”.
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Col Hamilton said.
The controversy comes as the US government is beginning to grapple with how to regulate artificial intelligence. Concerns over the technology have been echoed by AI ethicists and researchers, who argue that while there are ambitious goals for it, such as potentially curing cancer, those remain far off.
Meanwhile, they point to longstanding evidence of existing harms: the growing use of sometimes unreliable surveillance systems that misidentify black and brown people and can lead to over-policing and false arrests, the spread of misinformation across many platforms, and the potential dangers of using nascent technology to power and operate weapons in crisis zones.
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Col Hamilton said during his May presentation.
While the simulation Col Hamilton spoke of did not actually happen, Col Hamilton contends the “thought experiment” is still a worthwhile one to consider when navigating whether and how to use AI in weapons.
“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he said in a statement clarifying his original comments.
In a statement to Insider, the US air force spokeswoman Ann Stefanek said the colonel’s comments were taken out of context.
“The department of the air force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Ms Stefanek said. – Guardian