A visibly uncomfortable, slightly waxen-faced Mark Zuckerberg interrupted what had been, up to that moment, some fairly robotic testimony to a US Senate hearing to directly address families whose children have been harmed or died as a result of content on his platforms. “I’m sorry for everything you’ve all been through. It’s terrible. No one should go through what you and your families have suffered,” he said.
The Guardian described it as a “stunning moment”. The Facebook whistleblower, Frances Haugen, said it was “powerful” and “a turning point”.
Stunning? A turning point? Hardly. “Mark Zuckerberg embarrassed into another apology” is a headline that has been emerging from Silicon Valley over the past two decades with the depressing regularity of the latest extreme diet or exercise fad. “I’m the first to admit we’ve made a bunch of mistakes.” “We didn’t take a broad enough view of our responsibility and that was a big mistake.” “I want to take accountability for these decisions and for how we got here.” “I think I’ve grown and learned a lot.”
The subject varies, but the script doesn’t. We’ve made mistakes. We’ll try harder. And look, I’m a guy who can admit when he got it wrong. What those serial “sorrys” never acknowledge is that children or women being harmed on social media is not a mistake. It is not an unintended consequence; it is baked into the design. Social media relies on engagement, and a relaxed and emotionally stable user is not an engaged user. And so algorithms keep providing more disturbing, more discordant, more distressing material.
Facebook has known for years precisely what harms children – girls in particular – are suffering on its platforms because of the way these algorithms work. Documents released by Haugen, a former Facebook manager, revealed that the company was fully aware that Instagram is deleterious to girls’ mental health, but chose to do nothing.
Meta, of course, is not the only culpable platform. Large-scale studies have shown that the more time young people – girls especially – spend on social media generally, the worse they feel. Research conducted on children aged nine to 17 by Ireland’s National Advisory Council for Online Safety (NACOS) found that one in four children had seen harmful online content in the past year, including self-harm content, material that could trigger disordered eating, and violence. One in five had encountered sexually explicit material. Thirty per cent of young women aged 16 to 29 have experienced online harassment.
The web is simply “not working for women and girls”, said the man who created it, Tim Berners-Lee, four years ago. And that was before Elon Musk bought X, before the advent of artificial intelligence (AI), before the online war on women and girls stepped up another gear.
As Zuckerberg was preparing for his Congress testimony, explicit “deepfake” images of pop star Taylor Swift were flooding the social media platform X. Created with open source AI software, the fake images migrated from 4chan to Telegram and then to X, where they went viral. One racked up 47 million views and was only taken down after the singer’s fans flooded the platform in protest. But it was a warning bell for women. If it can happen to Swift, with all her resources, it can happen to anyone – literally anyone who has ever posted a selfie or been tagged in a photo on the internet. And good luck to anyone other than Swift trying to get it taken down.
The 17-year-old Marvel actor Xochitl Gomez spoke recently on a podcast about her efforts to get deepfake pornographic images of herself removed. “It made me weirded out... I wanted it taken down... This has nothing to do with me. And yet it’s on here with my face.”
There have been a lot of warnings about how artificial intelligence is going to replace jobs, revolutionise medicine and disrupt elections, but very little discussion of one thing we can say with virtual certainty: it will make the internet a worse place for women and children. AI and social media together form a highly toxic compound. The UK-based Internet Watch Foundation (IWF) recently said its “worst nightmares” about AI-generated child sexual abuse images are materialising: new videos being generated using existing images of child abuse victims; AI tools being used to “de-age” celebrities and depict them as children in sexual abuse scenarios; and even tools being used to remove the clothes of children in images posted online. An independent review of the evidence on online harms on video-sharing platform services (VSPS), prepared for Coimisiún na Meán in September, points out that the law-enforcement response is complicated by the fact that it is “difficult to determine whether the victims in the content are real or artificially generated”.
Better regulation of platforms, more sophisticated detection and moderation tools and stronger enforcement must all be part of the solution – but nothing will change until the tech companies see a threat to their bottom line. And there are pragmatic considerations for Ireland. The independent expert report for Coimisiún na Meán concludes that “regulation of VSPS without careful consideration of providers’ rights (overregulation) could lead to a removal of platform autonomy or impede company rights to fair competition, and may result in companies relocating” with negative economic consequences.
Brace yourself for another two decades of tech executives asking for forgiveness rather than for permission. Of contrived apologies. Of we-made-mistakes. Of we-couldn’t-have-seen-this-comings. Of crocodile tears and faux shock and of tech titans becoming more titanic. At the hearing, once his not-quite-grovelling apology had been delivered, Zuckerberg instantly reverted to the script. “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health,” he said, spinning so fast it was a wonder he didn’t throw up.