
A global pact to keep AI in check: Is that really so utopian?

Thorsten Jelinek, an expert in international digital governance, advocates an ethical approach that imagines AI as humanity’s moral partner

The new iPhone 16 features Apple Intelligence, the company’s new platform for artificial intelligence (AI) capabilities. Photograph: Shutterstock

Among the declarations to be adopted at the United Nations Summit of the Future in New York is a Global Digital Compact aimed at closing digital divides within and between countries. It will also commit the UN’s 193 member states to advance responsible data governance and to “enhance the international governance of Artificial Intelligence (AI) for the benefit of humanity.”

Many of the pact’s objectives are about expanding access to the benefits of digital technology, including AI, beyond richer countries and the wealthier people within societies. But where AI is concerned, there is a focus on the risks as well as the benefits of the technology. “We urgently need to inclusively assess and address the potential impact, opportunities and risks of AI systems on sustainable development and the wellbeing and rights of individuals,” a draft of the pact says.

The US, the EU and China have been converging on a risk-based approach to regulating AI, although fierce competition for dominance in the field has so far prevented them from agreeing a common framework.


Thorsten Jelinek, an expert in international digital governance at Berlin’s Hertie School and the Taihe Institute in Beijing, believes a new approach is needed. “I think it’s utopian to think that a risk-based approach based on regulation, which is the dominant one, will save us from a ubiquitous, highly capable AI,” he said.


Jelinek advocates the development of an ethical approach that imagines AI as a moral partner for humans, but in the meantime he suggests that a practical model is the Open Skies Treaty. This allowed participants to carry out unarmed aerial surveillance over one another’s territories to offer mutual reassurance that the other side was not about to attack.

“Neither side would disclose their AI models, but they will disclose eventually the capability of those models and inform the other side,” he said. “So when they discover what a highly capable frontier AI can do, then the other side is not surprised. It’s a minimal approach, but I think better than nothing.”