YouTube’s algorithm still amplifies violent videos, hateful content and misinformation despite the company’s efforts to limit the reach of such videos, according to a study published this week.
The Mozilla Foundation, a software nonprofit that is outspoken on privacy issues, conducted the 10-month investigation, which found that 71 percent of all videos flagged by volunteers as disturbing were recommended by YouTube’s algorithm.
The study, which Mozilla described as “the largest-ever crowdsourced investigation into YouTube’s algorithm,” used data volunteered by users who installed a Mozilla extension on their web browser that tracked their YouTube usage and allowed them to report potentially problematic videos.
The researchers could then go back and see whether the flagged videos had been suggested by the algorithm or whether users had found them on their own.
More than 37,000 users from 91 countries installed the extension, and the volunteers flagged 3,362 “regrettable videos” between July 2020 and May 2021.
Mozilla then brought in 41 researchers from the University of Exeter to review the flagged videos and determine if they might violate YouTube’s Community Guidelines.
Of the more than 3,300 flagged videos, 71 percent were suggested by the algorithm, according to the study.
Among them were a sexualized parody of “Toy Story” and an election video that falsely claimed Microsoft co-founder Bill Gates hired students involved with the Black Lives Matter movement to count ballots in battleground states.
Others included conspiracies about 9/11 and the COVID-19 pandemic, as well as the promotion of white supremacy, according to the report.
YouTube later removed 200 videos that participants flagged, which equates to about 9 percent.
But the videos had already accumulated more than 160 million views before they were taken down, according to Mozilla.
A spokesperson for YouTube questioned some of the findings, saying it was unclear how the study defined objectionable videos.
“It’s hard for us to draw any conclusions from this report, as they never define what ‘regretted’ means and only share a few videos, not the entire data set,” the spokesperson said.
“Our public data shows that consumption of recommended borderline content is significantly below 1% and only 0.16-0.18% of all views on YouTube come from violative content,” the statement added.
“We’ve introduced over 30 changes to our recommendation system in the past year, and we’re always working to improve the experience on YouTube.”