As the global AI race intensifies, Europe tries to close the gap

At a summit in Paris, the EU pledges to start putting its money where its mouth is when it comes to artificial intelligence

US vice-president JD Vance refused to sign the summit declaration. Photograph: Ludovic Marin/AFP via Getty

French president Emmanuel Macron struck an upbeat tone. The European Union was “back in the race” to become a big player in artificial intelligence (AI) technology, he declared. Macron was speaking in the Grand Palais, where he had gathered world leaders and tech executives for a two-day summit on AI.

Ever since OpenAI’s generative chatbot ChatGPT was publicly launched in late 2022, governments and companies have been trying to position themselves on the right side of the rapidly advancing technology.

The market has largely been dominated by US tech giants and start-ups, buoyed by huge amounts of venture capital funding, which some fear is creating a bubble waiting to pop.

The recent arrival of DeepSeek, a Chinese AI chatbot that can reportedly deliver similar results to rival ChatGPT at a fraction of the cost, has spooked markets and investors.


Billions of dollars were knocked off the market value of AI chipmaker Nvidia, a key player in the US industry. The apparent success of DeepSeek challenged the assumption that the United States might be too far ahead to be caught in the race to dominate AI technology.

“There is a desire from some of the players to convey this message that it is game over. The recent announcement and news in China shows this is not the case,” said one French official involved in organising the Paris summit.


That was the message that Macron and European Commission president Ursula von der Leyen were keen to project this week. They also didn’t come empty-handed. Macron announced that France had secured funding of €109 billion, large chunks of which will come from private sources, for new data centres and to invest in AI.

The European Commission, the EU’s executive arm, came with a plan to set up AI research facilities, which officials likened to the Cern lab in Geneva that hosts the Large Hadron Collider.

Von der Leyen said the EU planned to build four big facilities, or “gigafactories”, where European start-ups and other companies can train and develop AI models and software. The research sites will be funded from a €50 billion pot of investment to boost the EU’s AI industry.

The announcement is a “welcome signal” that Europe is looking to become more economically competitive, says Erik O’Donovan, head of digital policy at business group Ibec.

The commission-backed funding is in addition to the €150 billion for AI projects announced the day before by a consortium of European companies and investors. More than 60 companies are part of that project, including the likes of Spotify, Deutsche Bank, Airbus, Volkswagen, Philips, Siemens and French AI company Mistral.

Politicians and executives hope the significant step change in funding will help Europe make up ground it has lost to China and the US in the AI race.

Industry and business groups are likely to argue for regulations the EU introduced last year to be watered down as part of this push to close the gap with Silicon Valley. It is a wide gap.

President of the European Commission Ursula von der Leyen and US vice-president JD Vance at the US embassy in Paris on Tuesday. Photograph: Ian Langsdon/AFP via Getty

Speaking at the summit, US vice-president JD Vance said the new White House administration believed “excessive regulation” could kill off the AI sector as it was taking off.

“We need international regulatory regimes that foster the creation of AI technology rather than strangle it,” Vance said. Commenting on the EU approach to regulation, he said the race to dominate AI was not going to be won by “hand-wringing” over safety concerns, even as he insisted that “the Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and -manufactured chips”.

The idea for a package of rules governing how AI technology could be used was first proposed by the commission in 2021. Former Fine Gael MEP Deirdre Clune was one of the negotiators in the European Parliament working on the legislation.

She says that, initially when she talked to constituents, their eyes would “glaze over” when she mentioned her work on the draft legislation. After ChatGPT came on to the scene, it became interesting to people.

“The thing was changing by the hour, by the day,” she says. MEPs wanted to make the law “flexible”, knowing it might need to be updated as the technology advanced. “We tried to make it as innovation-friendly as possible,” she says.

“It has huge potential, but it needs to be pinned down. It needs to be regulated and have oversight ... You can’t have things run riot, making decisions about people,” says the former Ireland South MEP.

The AI Act, which began to come into force last year, puts extra obligations on use of the technology in “high-risk” settings, such as healthcare, banking, law enforcement and education.

EU lawmakers were concerned that AI systems might be used to unfairly score and screen applicants for services. There were also fears about bias being baked into algorithms, causing the technology to discriminate against certain groups, for example if their name indicated they were a foreign national or they had an address in a poor neighbourhood.

On the minds of many MEPs was a scandal that eventually brought down former Dutch prime minister Mark Rutte’s government in 2021. Tax authorities in the Netherlands had used an algorithm to flag suspected cases of child-benefit fraud. The system incorrectly targeted thousands of families, who were then aggressively pursued to repay benefits they had been entitled to claim.

The controversy saw people forced to sell their homes and left many destitute after being pressured to repay the money.

I taught the machine how to operate like a human being, basically so that it could replace me in a few years

—  Former Facebook content moderator Sonia Kgomo

Brando Benifei, an Italian MEP from the centre-left Democratic Party, was one of the lead negotiators of the AI Act. “We need rules and this is what we have done with the regulations in Europe,” he says.

There will undoubtedly be a push to pare back the new rules, or slow-walk their full implementation, he says.

“We cannot just say we slash our regulations and we will become competitive.”

A bigger problem is the lack of a single market for capital in the EU, which makes it more difficult for companies and start-ups to raise investment across borders to expand, he says. Easier access to capital financing and infrastructure are bigger concerns for European companies.

Peter Sarlin, chief executive of Silo AI, a Helsinki-based “lab” that develops the technology, says the definition of what counts as AI is wide. It ranges from generative text apps such as ChatGPT to tech in vacuum cleaners and lawnmowers. Companies operating in the field run up against “quite significant complexity” in navigating regulations, he says.

Far removed from the boardrooms of tech start-ups and talk of venture capital funding rounds are the low-paid workers who train some of the models and algorithms that underpin the technology.

One former Facebook content moderator, Sonia Kgomo, spent two years training the social media platform’s automated system to improve its accuracy in removing content that breached its community standards.

“I taught the machine how to operate like a human being, basically so that it could replace me in a few years,” she says. Kgomo is not employed directly by Facebook, but by a company contracted to provide content moderators in Nairobi, Kenya. “I was viewing a lot of graphic content, it involved a lot of human mutilation, it involved a lot of suicide content ... It involved a lot of child sexual exploitation,” she says.

Kgomo now works as a labour organiser with UNI Global, focusing on improving conditions for low-paid workers in the sector.

Tech giants are developing AI products worth billions of dollars, yet some of their contract workers cannot afford therapy to cope with the disturbing content they have to screen to train those same AI models, she says.

“AI needs the confidence of people and has to be safe,” von der Leyen insisted at the summit. But both the United States and the UK refused to join dozens of other countries in signing a declaration to ensure that the technology was “safe, secure and trustworthy”, dealing a setback to efforts to build international consensus around the technology.