In her recent quarterly letter, Susan Wojcicki, the chief executive of YouTube, wrote: “A commitment to openness is not easy. It sometimes means leaving up content that is outside the mainstream, controversial or even offensive. But I believe that hearing a broad range of perspectives ultimately makes us a stronger and more informed society, even if we disagree with some of those views.”
Pronouncements like these from YouTube often seem to shirk responsibility for the consequences of the content. It is not just about the rights and wrongs of what material is on the platform; it is also about what happens when that content is consumed. When Wojcicki talks about a “broad range of perspectives”, it is as if people browsing videos on YouTube were encountering a wide range of differing opinions from every political persuasion, rather than a wide range of opinions within a particular political world view. YouTube is the filter bubble, weaponised.
A paper published last month by researchers at universities in Brazil, Switzerland and the US, titled Auditing Radicalisation Pathways on YouTube, offers a compelling insight into how people who start at the thin end of the wedge of this content can be nudged towards more extreme material.
“Non-profits and the media claim there is a radicalisation pipeline on YouTube,” the study’s authors wrote. “Its content creators would sponsor fringe ideas, and its recommender system would steer users towards edgier content. Yet, the supporting evidence for this claim is mostly anecdotal, and there are no proper measurements of the influence of YouTube’s recommender system. In this work, we conduct a large-scale audit of user radicalisation on YouTube.” The study analysed 331,849 videos on 360 channels and processed 79 million comments, with an additional focus on YouTube’s recommendation algorithm, examining two million recommendations made between May and July 2019.
Auditing the web
Dividing videos into three categories – the Intellectual Dark Web (IDW, a nebulous collection of thinkers that can include the likes of Jordan Peterson, Joe Rogan, Ben Shapiro, Sam Harris, and so on), the Alt-lite (right-wingers who claim to reject white nationalism) and the Alt-right (far-right white nationalists) – the authors wrote of “auditing the web”. This is, in part, about reverse engineering the input-output relationships of algorithms to determine whether they are discriminatory, destructive or otherwise produce troublesome outcomes. Algorithms are initially engineered by humans but, as they develop, there can be unforeseen consequences as they learn, morph and personalise. In this study, the authors reference earlier algorithmic audits, such as work that measured discrimination on Airbnb.
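In spirit, that kind of audit treats the recommender as a black box: feed it inputs, record its outputs, and measure where repeated hops tend to land. Below is a minimal, illustrative sketch in Python of the underlying idea, assuming we had already scraped a table of channel-to-channel recommendations and labelled each channel by community; the channel names, labels and graph here are invented for illustration and are not the study’s actual data or pipeline.

```python
import random
from collections import Counter

# Illustrative, hand-made recommendation graph: each channel maps to the
# channels the platform recommends alongside it. All names and labels
# below are hypothetical.
RECOMMENDATIONS = {
    "idw_channel_a":      ["idw_channel_b", "altlite_channel_a"],
    "idw_channel_b":      ["idw_channel_a", "altlite_channel_b"],
    "altlite_channel_a":  ["altlite_channel_b", "altright_channel_a"],
    "altlite_channel_b":  ["idw_channel_a", "altright_channel_a"],
    "altright_channel_a": ["altright_channel_a", "altlite_channel_a"],
}
CATEGORY = {
    "idw_channel_a": "IDW", "idw_channel_b": "IDW",
    "altlite_channel_a": "Alt-lite", "altlite_channel_b": "Alt-lite",
    "altright_channel_a": "Alt-right",
}

def random_walk(start: str, steps: int) -> str:
    """Follow recommendations at random and return the final channel."""
    channel = start
    for _ in range(steps):
        channel = random.choice(RECOMMENDATIONS[channel])
    return channel

def audit(start: str, steps: int = 5, walks: int = 10_000) -> Counter:
    """Count which communities repeated walks from `start` end up in."""
    return Counter(CATEGORY[random_walk(start, steps)] for _ in range(walks))

if __name__ == "__main__":
    # Starting from an IDW channel, how often do five recommendation
    # hops land on Alt-lite or Alt-right content?
    print(audit("idw_channel_a"))
```

Run enough of these walks and the distribution of end points becomes a rough measurement of where the recommender tends to steer people – which is, broadly speaking, what the researchers did at far greater scale.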
In analysing the overlap between users across channels and communities, they found that users who consume content in the communities the study is interested in are more engaged, and that “the IDW, the Alt-lite, and the Alt-right increasingly share the same commenting user base”. To track user migration from milder to more extreme content, the study looked at users who did not comment on Alt-right content in a given year and tracked their subsequent activity in the channels being studied.
Roughly 40 per cent of users commenting on Alt-right content can be traced back to users who previously commented only on Alt-lite or IDW videos. “Moreover, we can observe that, consistently, users who consumed Alt-lite or IDW content in a given year, go on to become a significant fraction of the Alt-right user base in the following year.” On YouTube’s recommendation algorithm, which even the casual observer can see often pushes edgier content, the authors wrote: “Overall, we find that, particularly in the channel recommender system, it is easy to navigate from the IDW to the Alt-lite (and vice-versa), and it is possible to find Alt-right channels.”
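Stripped of the scraping and classification work, the migration measurement itself is a straightforward cohort calculation: take the users who commented only on IDW or Alt-lite channels in one year and ask what share of the following year’s Alt-right commenters they make up. A rough Python sketch of that logic, using a handful of invented comment records rather than the paper’s dataset:

```python
# Each record is (user_id, community, year) for a single comment.
# The data here is invented purely to show the shape of the calculation.
COMMENTS = [
    ("u1", "IDW", 2017), ("u1", "Alt-lite", 2017), ("u1", "Alt-right", 2018),
    ("u2", "IDW", 2017), ("u2", "IDW", 2018),
    ("u3", "Alt-right", 2017), ("u3", "Alt-right", 2018),
    ("u4", "Alt-lite", 2017), ("u4", "Alt-right", 2018),
]

def communities_by_user(year: int) -> dict[str, set[str]]:
    """Map each user to the set of communities they commented on in `year`."""
    result: dict[str, set[str]] = {}
    for user, community, y in COMMENTS:
        if y == year:
            result.setdefault(user, set()).add(community)
    return result

def migration_share(year: int) -> float:
    """Fraction of Alt-right commenters in `year` + 1 who, in `year`,
    commented only on IDW and/or Alt-lite channels (never Alt-right)."""
    before = communities_by_user(year)
    after = communities_by_user(year + 1)
    altright_now = {u for u, cs in after.items() if "Alt-right" in cs}
    migrated = {
        u for u in altright_now
        if u in before and "Alt-right" not in before[u]
    }
    return len(migrated) / len(altright_now) if altright_now else 0.0

if __name__ == "__main__":
    print(f"{migration_share(2017):.0%} of 2018 Alt-right commenters "
          "came from milder communities in 2017")
```

With the toy records above, two of the three 2018 Alt-right commenters arrived from milder communities; in the study’s real data, the comparable figure was the roughly 40 per cent cited above.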
‘Radicalisation pipeline’
Ultimately, the study “resonates with the narrative that there is a radicalisation pipeline”, and “regardless of the role of the recommender system”, the authors wrote, “user radicalisation on YouTube seems to be significant”. This matters, of course, not just because of the rise of the far right in various parts of the world, but also because of the links many mass shooters in the US and elsewhere have to far-right ideologies fostered online, of which misogyny is often the canary in the coal mine. Such a pipeline also offers a playbook for radicalising people.
Online communities are where large swathes of people are becoming politicised and informed and indoctrinated. This is the first time in human history that political indoctrination is occurring in this manner at this scale. The rise of contrarians has coalesced with a media platform, YouTube, whose revenue model is rooted in competing for attention, and getting people to spend as much time as possible watching videos. Each video it recommends to a viewer, therefore, needs to be more compelling than the last.
Being aware of radicalisation pathways is not about policing the development of thought, but about being cognisant that, increasingly, the ideas we consume are not chosen by us but for us, often by algorithms that know more about what we will find compelling than our own minds do. While YouTube may claim that some of its offensive content is about a diversity of ideas, it is increasingly clear that the platform is not engineered to challenge our world views, but to lead us further into a more densely packed and homogenised thoughtscape. We show interest in a tree; it leads us into the forest. This is less about enlightening us, and more about directing us towards darkness.