Russia now wielding AI tools in online disinformation wars

Karlin Lillington: US expert argues for tighter controls and an end to lax containment efforts

Despite government and corporate attempts to bring in bans and restrictions, Russian online disinformation campaigns continue to reach the eyes and ears of North Americans and Europeans – largely because containment efforts are too lax, according to a US expert on Russian affairs.

“Russia’s been in the disinformation game for decades so this is not a new challenge, but the scary part about Russian disinformation campaigns right now is how they’re able to evade the various sanctions and bans that the EU, in particular, and the US have put on official Russian state-sponsored media outlets,” says David Salvo, senior fellow and managing director of the Alliance for Securing Democracy (ASD) at the German Marshall Fund.

The fund is a non-partisan American public policy think tank that seeks to promote co-operation and understanding between North America and the EU. Salvo was in Ireland this week to address a conference on the current situation in Ukraine, hosted by the US and Ukrainian embassies.

In an interview, Salvo said Russia is using “proxy outlets” to game search engine results and push their content consistently into the top 10 search results for search and news services provided by platforms such as Google and Bing.

Generative AIs are being used to “essentially repurpose content from official Russian state-sponsored media like RT and, in lightning speed, create 100 articles with minor differences that get by plagiarism detectors through networks of proxy sites”, often sham news sites. Because AI is now readily available, there’s been “a proliferation of threat actors and an increase in speed”.

He acknowledges that the methods aren’t “appreciably changing the debate in Ireland or in Brussels or in Washington, but it’s still having an effect”.

For example, about six months after Russia’s invasion of Ukraine, Russia “started to really lean into spreading and amplifying narratives about the economic costs of supporting Ukraine” as those arguments started to emerge in the US, particularly from Republican politicians.

Yet, Salvo says, when ASD analysed social media statements last autumn from US candidates before last November’s midterm elections, it found only about 10 per cent of comments on Ukraine from Republicans “were negative or called into question ongoing support”.

However, “the most engaged-with comments on social media were all anti-Ukraine . . . the loudest voices in the room were anti-Ukraine, even though they are the distinct minority”.

Most of that engagement was probably “organic” – from US supporters – but Russia swiftly began to amplify those views and create “feedback loops” to boost further engagement, by Russian state news outlets such as RT, and “grey accounts” that aren’t directly connected to state-supported news sites but are “likely affiliated in some capacity with Russian state-sponsored actors that also then weigh in and boost these narratives”.

This then gives a minority of US voices “an outsized influence on the political debate”.

Of the major online platforms, Salvo says X (formerly Twitter) “obviously is still the worst offender in light of the changes there [after Elon Musk purchased the platform]”. But the same problems are happening across all the platforms, he says.

ASD found 78 accounts on TikTok that were “either openly attributed to Russian state-sponsored media or very clearly Russian state-sponsored media, and most of them were still unlabelled as Russian propaganda”, despite a requirement that they be identified as such. TikTok only began labelling state-connected media generally after pressure from western and Ukrainian officials.

As of March, the TikTok accounts had 14 million followers and 319 million likes.

“TikTok was lax in enforcing its own labelling policy,” Salvo says.

The Spanish-language version of RT, which had six million subscribers before YouTube removed it, and channels related to its shows still dodge YouTube’s ban, often by removing or concealing media logos, he notes. RT has similarly evaded bans on Facebook and Instagram.

“Meta has allowed this to metastasise and hasn’t taken action against them,” Salvo says.

Meanwhile, the “hack and leak” tactics that were successful for Russia in the 2016 US presidential election seem to have faded away. But with elections looming in the US, EU and member states such as Ireland, they could reappear, he says.

Ireland was among the countries to receive a recent formal warning from the US about the possibility of Russian election interference. Upcoming EU parliamentary elections might offer a “soft target” compared to the US presidential race, he adds.

He doesn’t believe Russia ever seriously thought Donald Trump would win the 2016 election. The Kremlin’s main goal was broader, “to inject a huge degree of chaos to undermine confidence in election integrity, and that’s been the gift that keeps on giving. We’re still dealing with consequences of that”.

What would he like to see done to more effectively limit Russia’s online influence?

“I’d like to see more consistent enforcement of European laws and regulations,” Salvo says, with a note of exasperation.

“There’s some compliance, but again, it’s inconsistently enforced. And the punitive measures, I think, are probably paltry compared to what these companies can withstand. So there’s no real incentive to change behaviour radically.”