No more self-regulation for the tech sector. That was the message sent to Big Tech with the introduction of the online safety code. The new rules are part of a raft of legal restrictions and responsibilities placed on technology companies to ensure that harmful content is not easily available online.
The code is designed to make online services safer not just for children, but for everyone, tackling everything from illegal content to dangerous challenges – anything that can be deemed harmful to children and the wider population.
It also requires platforms to put in place appropriate forms of age verification to protect children, and to implement parental controls that help parents limit both the time children spend online and the type of content they can see. Platforms must also have clear systems for reporting content that violates the rules, and take action on those reports.
But who decides what is harmful? Who does the act actually apply to? And what are the penalties for breaching the code?
The code, which specifically focuses on video-sharing platforms, has been several years in the making. It was enabled by several pieces of legislation: the Online Safety and Media Regulation Act 2022, signed into law by President Michael D Higgins in December 2022; the EU Digital Services Act; and the EU Terrorist Content Online Regulation. Work began in earnest with the establishment of Coimisiún na Meán (CnM) in March 2023 and the appointment of Ireland’s first online safety commissioner, Niamh Hodnett.
CnM issued a call for submissions on the proposed code, then developed a draft that it put out for public consultation, receiving around 1,400 responses that helped shape the new rules.
Based on those responses, it revised the draft, arriving at a legally binding set of rules regulating content on designated video-sharing platforms. The code was formally adopted in October 2024.
What does the code cover?
The code focuses on harmful and illegal content on video-sharing platforms in a number of forms, from the video content created by users to commercial video material that is available on these platforms.
It is not just the video itself, but also the captions that accompany it, and comments associated with it.
Who is covered by the legislation?
If your platform is about sharing videos and your European headquarters are in Ireland, then the code applies to your service.
Coimisiún na Meán has designated 10 platforms that are covered – for now – including the obvious ones: Facebook, Instagram, TikTok, X, YouTube and LinkedIn. Udemy, Pinterest, Tumblr and Reddit were also on the list published by the commission in December 2023.
Not everyone has taken the news well, with both Tumblr and Reddit taking cases to the High Court to argue that they should not be designated as video-sharing platforms. Those cases were dismissed in June last year.
What does the code mean for these online services?
Aside from ensuring that certain content is not uploaded to the site, video-sharing platforms are required to put in place robust age-verification systems to ensure that younger users are not gaining access to harmful content. The code also compels them to make parental controls available, allowing parents not only to limit the time spent online and the type of content users can access, but also to control who can see their children’s content online.
There must also be a transparent and easy-to-follow reporting system for content that does breach the rules.
The more general rules came into force in October 2024, but the commission gave platforms a nine-month grace period to make the system changes and other preparations needed to comply with the age-verification and content rules in Part B of the code.
In the eyes of some child-safety campaigners, that gave platforms "more than enough time" to develop robust age-verification systems, stringent content controls to prevent children being exposed to harmful material, and easy-to-use reporting systems.
Platforms serving up such content can’t rely on simply asking users how old they are and trusting that they will tell the truth. Although the code doesn’t say exactly how companies must verify ages, Hodnett has made clear that "robust, privacy-respecting" verification measures are needed.
What is considered harmful content?
The code defines harmful content widely. The obvious candidates – adult content such as pornography and violence – must be restricted to prevent younger users from easily stumbling across them. Cyberbullying, self-harm and suicide content, content that promotes or glorifies eating disorders, and incitement to hatred and violence are also covered. Even dangerous challenges that go viral on social media can be deemed harmful to younger users, and therefore subject to the code.
Failure to meet these new obligations will mean fines of up to €20 million, or 10 per cent of the platform’s annual turnover – whichever is greater.
What about its critics?
X, the platform formerly known as Twitter, has taken a court case challenging the rules. The platform claims Coimisiún na Meán engaged in “regulatory overreach” in its approach to restrictions on certain video content, saying the code contradicts Irish legal requirements for protecting and balancing fundamental rights, particularly freedom of expression.
That case is currently working its way through the courts.
The code has been widely welcomed elsewhere, but some concerns have been raised. Privacy campaigners, for example, have expressed concerns about how the data required for age verification will be stored.
Some campaigners, meanwhile, don’t think the new code goes far enough. Child online safety charity CyberSafeKids, for example, welcomed the code as a “step in the right direction”, describing it as “a milestone” that shifts legal responsibility on to tech companies to protect children online.
“This shift finally places a clear obligation on platforms to face the reality that underage users are accessing harmful content daily on their platforms, and to implement effective safeguards,” CyberSafeKids said.
However, it noted that the code applies only to the 10 platforms designated by CnM, excluding other widely used social media platforms, such as Snapchat, and gaming platforms popular with children, such as Roblox.
“This leaves significant gaps in the protection of children who are encountering similarly harmful content in gaming environments, highlighting the urgent need for broader regulatory coverage to ensure online safety for children across all digital environments,” the charity said.
CyberSafeKids has also raised concerns that the code was vague in parts, without clear time frames for handling harmful content and complaints.
Another issue was the decision to remove measures dealing with “recommender” algorithms – software that promotes and highlights content, pushing it to users – from the code. Those algorithms can often be the source of the very harmful content the code is trying to protect users from.
When the draft code was published, the Irish Council for Civil Liberties said it was “dismayed” by the decision to remove measures to address toxic algorithms from the final code. ICCL senior fellow Dr Johnny Ryan described it as “a dangerous U-turn”.
How will it be enforced?
Coimisiún na Meán will monitor the platforms to ensure the rules are being applied, particularly the mechanisms for reporting illegal or infringing content. Members of the public can also make complaints to the commission, although they should complain to the platform first.
Campaigners have called on the commission to review the code’s efficacy within 12 to 24 months, and if it is failing to protect children, to implement stiffer penalties.
“Ultimately, we need a long-term strategy that ensures we are preparing and equipping children and young people for safe and enriching online experiences, and that adequate and robust legislation and resources are in place to effectively achieve this,” CyberSafeKids says.