In an email statement, a YouTube spokesperson says that the company has made “significant progress in our work to combat hate speech on YouTube since the tragic attack at Christchurch.” Citing 2019’s strengthened hate speech policy, the spokesperson says that there has been a “5x spike in the number of hate videos removed from YouTube.” YouTube has also altered its recommendation system to “limit the spread of borderline content.”

YouTube says that of the 1.8 million channels terminated for violating its policies last quarter, 54,000 were for hate speech, the most ever. YouTube also removed more than 9,000 channels and 200,000 videos for violating rules against promoting violent extremism. In addition to Molyneux, YouTube’s June bans included David Duke and Richard Spencer. (The Christchurch terrorist donated to the National Policy Institute, which Spencer runs.)

“It’s clear that the core of the business model has an impact on allowing this content to grow and thrive,” says Lewis. “They’ve tweaked their algorithm, they’ve kicked some people off the platform, but they haven’t addressed that underlying issue.”

By design, online culture does not begin and end with YouTube or any other single platform: cross-platform sharing is fundamental to the social media business model. “YouTube isn’t just a place where people go for entertainment; they get sucked into these communities. Those allow you to participate via comment, sure, but also by making donations and boosting the content in other places,” says Joan Donovan, research director of Harvard University’s Shorenstein Center on Media, Politics, and Public Policy. According to the New Zealand government’s report, the Christchurch terrorist regularly shared far-right Reddit posts, Wikipedia pages, and YouTube videos, including in the chat of an unnamed gaming site.

The Christchurch mosque terrorist also followed and posted on several white nationalist Facebook groups, sometimes making threatening comments about immigrants and minorities. According to the report authors who interviewed him, “the individual did not accept that his comments would have been of concern to counter-terrorism agencies. He thought this because of the very large number of similar comments that can be found on the internet.” (At the same time, he did take steps to minimize his digital footprint, including deleting emails and removing his computer’s hard drive.)

Reposting or proselytizing white supremacist material without context or warning, says Donovan, paves a frictionless road for the spread of fringe ideas. “We have to look at how these platforms provide the capacity for broadcast and for scale that, unfortunately, have now started to serve negative ends,” she says.

YouTube’s business incentives inevitably stymie that sort of transparency. There aren’t great ways for outside experts to assess or compare techniques for minimizing the spread of extremism cross-platform. They often must rely instead on reports put out by the businesses about their own platforms. Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society, says that while YouTube reports an increase in extremist content takedowns, the measure doesn’t speak to its past or current prevalence. Researchers outside the company don’t know how the recommendation algorithm worked before, how it changed, how it works now, and what the effect is. And they don’t know how “borderline content” is defined—an important point considering that many argue it continues to be prevalent across YouTube, Facebook, and elsewhere.

“It’s hard to say whether their effort has paid off,” says Kelley. “We don’t have any information on whether it’s really working or not.” The ADL has consulted with YouTube, but Kelley says he hasn’t seen any documents on how the company defines extremism or trains content moderators to handle it.

A real reckoning over the spread of extremist content has incentivized big tech to put big money into finding solutions. Throwing moderation at the problem appears effective. How many banned YouTubers have withered away in obscurity? But moderation doesn’t address the ways in which the foundations of social media as a business (creating influencers, cross-platform sharing, and black-box policies) are also integral factors in perpetuating hate online.

Many of the YouTube links the Christchurch shooter shared have been removed for breaching YouTube’s moderation policies. But the networks of people and ideologies engineered through them, and through other social media, persist.

