In the age of fake news and radicalization, the real enemy is not content itself. It’s the algorithm pushing that content to the top of users’ ‘recommended’ lists.
That’s according to software engineer Guillaume Chaslot. He should know: he used to work at YouTube and helped build the tech giant’s recommendation algorithm.
“We need to understand the difference between freedom of speech and freedom of reach. You’re free to say whatever you want to say – but there shouldn’t be freedom to amplify this,” Chaslot told a conference ahead of the Mozilla Festival weekend in London.
He added that extreme content is not, in itself, the problem; in fact, he favors platforms hosting as much content as possible.
Google’s YouTube has recently come in for criticism over its poor handling of potentially harmful content. In response, it has made removing videos that violate its policies its number-one priority.
At the same time, the platform has to perform a delicate balancing act between content moderation and freedom of speech. In a quarterly letter to YouTubers, CEO Susan Wojcicki wrote: “A commitment to openness is not easy. It sometimes means leaving up content that is outside the mainstream, controversial or even offensive.”
The debate about content moderation is not new. But for Chaslot, the issue now is that companies not only publish content, but apply algorithms to it.
“Algorithms are built to boost watch time, and that typically happens through viewing increasingly radical videos,” he told ZDNet.
“Someone could be completely radicalized through viewing hours of YouTube videos on end – and from the perspective of the algorithm, that’s actually jackpot.”
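The dynamic Chaslot describes can be sketched in a few lines of code. This is a toy illustration, not YouTube’s actual system: the catalog, the extremity scale, and the watch-time model below are all invented for the example. It shows how a recommender that optimizes for a single objective, predicted watch time, can walk a viewer toward ever more extreme videos when small escalations are what hold attention best.

```python
# Toy sketch of a watch-time-maximizing recommender. All numbers and the
# behavioral model are invented; nothing here reflects YouTube's real code.
from dataclasses import dataclass


@dataclass(frozen=True)
class Video:
    title: str
    extremity: float  # invented scale: 0.0 = mainstream, 1.0 = most extreme


def predicted_watch_minutes(video: Video, last_extremity: float) -> float:
    """Invented viewer model: people watch longest when a video is slightly
    more extreme than the last one, and bounce if the jump is too large
    (or the video is tamer than what they just saw)."""
    jump = video.extremity - last_extremity
    if jump < 0 or jump > 0.35:
        return 2.0  # viewer loses interest quickly
    return 5.0 + 20.0 * video.extremity


def recommend(candidates: list, last_extremity: float) -> Video:
    # The objective is watch time only: nothing penalizes extremity or
    # rewards diversity -- the gap Chaslot says he flagged internally.
    return max(candidates, key=lambda v: predicted_watch_minutes(v, last_extremity))


catalog = [
    Video("Evening news recap", 0.1),
    Video("Heated panel debate", 0.4),
    Video("One-sided takedown", 0.7),
    Video("Conspiracy deep dive", 0.9),
]

last_extremity = 0.0
session = []
remaining = list(catalog)
for _ in range(4):
    pick = recommend(remaining, last_extremity)
    session.append(pick)
    remaining.remove(pick)
    last_extremity = pick.extremity

print([v.extremity for v in session])  # → [0.1, 0.4, 0.7, 0.9]
```

Each recommendation is locally optimal for watch time, yet the session as a whole escalates step by step, which is exactly why Chaslot argues the objective, not any individual video, is the problem.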
When he worked at YouTube, he said he raised this issue and suggested including more diverse videos in the platform’s recommendation algorithm.
He was met with skepticism from management, so he left the company and started digging to find out where exactly the algorithm would lead him.
This coincided with the 2016 presidential election in the USA, and his research confirmed that YouTube’s algorithm was pushing users to watch more radical videos.
His results were published last year, with the disclaimer that they could only be partial, since the company withholds from the public any data about which content its algorithm promotes.
“We don’t know how much YouTube promotes radical ideas like terrorism,” said Chaslot. “They are doing better but in the course of history, we have no idea, and we will probably never know.”
YouTube has ramped up efforts to change its recommendation algorithm. This year, it launched a trial in the UK to reduce the spread of what it calls “borderline content,” after a similar trial in the US halved the views such content received from recommendations, according to the company.
But this is not enough, according to Chaslot. He added that effective legislation is now needed to tackle the issue.
“It is similar to when we realized that tobacco was killing people,” he said. “First, we needed the scientific evidence showing that tobacco is harmful – and now, we need the scientific evidence that YouTube is promoting extremism.”
Only once this evidence is produced can there be growing public awareness of the issue, before legislation is introduced, he said. “We made rules to stop people from smoking in public places, not from smoking altogether,” he pointed out. “Something similar should be done with content.”
But with current laws, nothing forces online platforms to share the data that would enable scientific research in the first place.
As a result, the world is governed by secret algorithms that decide 70% of what viewers see on YouTube, and 100% of what they read on Facebook, he argued. With some understatement, Chaslot described this as “a bit crazy”.
ZDNet has contacted YouTube for comment and will update this article if it receives a response.