The different approaches taken by two of the biggest social media companies in recent days have highlighted the high stakes of managing problematic online content.
As protests against the killing of black people by police swept the United States, US president Donald Trump wrote on Twitter and Facebook: "Any difficulty and we will assume control but, when the looting starts, the shooting starts".
In a first in its dealings with the US president, Twitter obscured the post with a warning label, saying it violated its rules against glorifying violence.
But Facebook took a different approach, leaving the president's post as it was. Chief executive Mark Zuckerberg said that while he found the post "deeply offensive", it did not violate his company's policies.
Both companies have faced consequences.
Twitter has been the focus of a backlash from Trump, who threatened to “strongly regulate” social media companies or “close them down”. For Facebook, the heat came from employees who openly criticised the company’s inaction. Two of them publicly quit.
What kinds of posts should not be allowed on social media, who should make those decisions, and how they should be made together form a regulatory puzzle. And Zuckerberg has asked the European Commission to solve it.
In a public discussion with internal market commissioner Thierry Breton last month, the Facebook boss asked the commission to set rules for social media companies to follow. "Basically, the platforms shouldn't be left to govern themselves," Zuckerberg said. Europe had the opportunity to set a standard for the world, he argued, and should do so before China did.
It might seem a strange position for Zuckerberg to take, but it makes sense.
Such platforms are too big and too prominent to host harmful speech without damaging their reputations. If there are to be rules, it’s easier for Facebook to follow one set of them across the EU than a different law in each country. And it’s easier still if the company does not have to do the expensive and difficult work of solving the riddle itself.
Under current EU rules, internet companies are not liable for content published on their platforms, but once notified of illegal content they must act.
Monitoring content is a minefield. Both humans and artificial intelligence are bad at it in different ways.
Text is one thing: it can be scanned for problematic keywords. Video is rich in information and time-consuming to monitor. One clip could contain multiple languages, fuzzy outlines, and key details only a person familiar with the context would pick up on. That’s why members of China’s persecuted Uighur minority use TikTok to find out about crackdowns: the censors can’t keep up.
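To see why text is the comparatively tractable case, and why even it misfires, consider a deliberately crude keyword filter of the sort described above. This is a minimal sketch, not any platform’s actual system; the word list and function names are placeholders, and real pipelines layer machine-learning classifiers and human review on top.

```python
# A deliberately crude keyword scan. The terms below are illustrative
# placeholders, not any platform's real blocklist.
FLAGGED_TERMS = {"looting", "shooting"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged keyword."""
    words = {word.strip('.,!?":;').lower() for word in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

print(flag_post("when the looting starts, the shooting starts"))  # True
print(flag_post("shooting hoops after the market"))  # also True: no sense of context
```

The second example shows the core weakness: a word match has no grasp of context, sarcasm or intent, which is why keyword scanning alone cannot settle what counts as glorifying violence.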
Once online networks have a copy of a problematic video, they can automatically scan for the same clip and remove it; a video known to the system is blocked from being published again. Hollywood films don't appear on YouTube because such copyrighted content is scanned for and removed.
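The mechanics behind that rescanning can be sketched as a fingerprint lookup. The sketch below uses a cryptographic hash, which only catches byte-identical copies; production systems are understood to use perceptual hashes that survive re-encoding and cropping. All names here are illustrative.

```python
import hashlib

# Hypothetical registry of fingerprints for clips already ruled violating.
KNOWN_BAD_FINGERPRINTS: set[str] = set()

def fingerprint(video_bytes: bytes) -> str:
    # SHA-256 catches only exact copies; real matchers use perceptual
    # hashes so re-encoded or trimmed versions still match.
    return hashlib.sha256(video_bytes).hexdigest()

def register_violation(video_bytes: bytes) -> None:
    """Record a removed clip so re-uploads can be blocked."""
    KNOWN_BAD_FINGERPRINTS.add(fingerprint(video_bytes))

def allow_upload(video_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a removed clip."""
    return fingerprint(video_bytes) not in KNOWN_BAD_FINGERPRINTS
```

The lookup is cheap once the fingerprint exists, which is why re-uploads can be blocked at scale. The hard part, as the next section shows, is the first copy.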
Mass shootings
But there’s no pre-emptive scanning, so deeply disturbing content, such as video of mass shootings, has been broadcast live on Facebook.
Then there’s the question of how to manage communication on the private messaging app WhatsApp, which has been a major source of misinformation on Covid-19 all over the world. Messages on it are end-to-end encrypted, so not even the company can read them, and there is no central oversight.
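The oversight problem follows directly from the design. WhatsApp actually uses the Signal protocol; the sketch below substitutes simple symmetric encryption (via Python’s cryptography library) purely to illustrate the principle that the relay server only ever handles ciphertext.

```python
from cryptography.fernet import Fernet

# The key is negotiated between the two devices and never leaves them;
# Fernet here stands in for WhatsApp's actual Signal-protocol key exchange.
shared_key = Fernet.generate_key()

sender = Fernet(shared_key)
ciphertext = sender.encrypt(b"forwarded miracle-cure rumour")

# The relay server stores and forwards only this opaque token;
# there is nothing meaningful for the company, or a regulator, to scan.
print(ciphertext)

# Only a device holding the key can recover the message.
receiver = Fernet(shared_key)
print(receiver.decrypt(ciphertext).decode())
```

Because the platform never holds readable message content, the scanning techniques used on public posts and videos simply do not apply here.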
The European Commission views problematic speech online as falling into roughly three categories. The first is terrorist communication, which is clearly illegal. The second is hate speech, for which member states have differing national laws. The third covers false information and foreign interference, and it is the most difficult to deal with, fraught as it is with subjectivity and trade-offs against freedom of speech.
The harm that misinformation can cause has become clear during the coronavirus pandemic, with poisoning incidents spiking across the world as false cures are promoted.
But the dangers of regulating this kind of speech have also been made clear by the case of Hungary. There, the authoritarian government of Viktor Orban created sweeping new powers in response to the pandemic, including penalties of several years in prison for people who publish untrue or distorted facts. This has led to police raids on individuals who criticised the government on Facebook.
The EU this week launched a public consultation to gather information on how it should update its 20-year-old regulation of internet services, the e-Commerce Directive, in a new Digital Services Act. The topics on the table are enormous, ranging from how to manage illegal activity on the internet to dealing with online monopolies.
When it comes to judging acceptable online content, the commission is likely to address only what is illegal, and steer clear of the stickier questions that overlap with freedom of speech.
Ultimately it may pass the buck back to the social media companies. “They have an interest in keeping their house in order because they have an interest in remaining an attractive place for information sharing,” one official said. “We cannot solve all the problems.”