As a Google representative championed the company's work in going "beyond what the U.S. requires" in cutting hateful content from its platforms, users flooded the comments section of a YouTube livestream of that very congressional hearing with white nationalist, supremacist, and otherwise bigoted posts.
The hate speech was so rampant that YouTube, a Google subsidiary, eventually shut the live comments off.
Hate speech has no place on YouTube. We’ve invested heavily in teams and technology dedicated to removing hateful comments / videos. Due to the presence of hateful comments, we disabled comments on the livestream of today’s House Judiciary Committee hearing.
— YouTubeInsider (@YouTubeInsider) April 9, 2019
Facebook's livestream of the hearing had fewer such comments, but that platform has also struggled to rein in white nationalist content. In the past week, it initially defended its decision to keep up a video by prominent Canadian white nationalist Faith Goldy before eventually taking it down. After that removal, journalists like Right Wing Watch's Jared Holt quickly found other examples of white nationalist videos still on the site, including one by far-right activist Lauren Southern pushing the myth of "white genocide."
During the hearing, one witness, Kristen Clarke, who is the president of the Lawyers’ Committee for Civil Rights Under Law, pointed out two further examples of white nationalist pages—the “Nationalist Agenda” and “It’s Okay To Be White”—that Facebook had yet to take down.
Facebook and Google's descriptions of their own work during the House Judiciary Committee hearing painted a completely different picture.
“I would like to be clear, there is no place for terrorism or hate on Facebook. We remove any content that incites violence, bullies, harasses, or threatens others and that’s why we’ve had long-standing policies against terrorism and hate and why we’ve invested in safety and security in the past few years,” Neil Potts, a director of public policy at Facebook, told lawmakers during the hearing.
“Over the past two years, we’ve invested heavily in machines and people to quickly identify and remove content that violates our policies. Hate speech and violent extremism have no place on YouTube. We believe we have developed a responsible approach to address the evolving and complex issues that manifest on our platform,” said Alexandria Walden, Google’s counsel for free expression and human rights.
Given how easy it is to find exceptions to these claims, Facebook and Google's descriptions of what's happening on their platforms seem more aspirational than descriptive.
The examples identified at the hearing aren't isolated exceptions, and they aren't things the platforms are unaware of. They're easily accessible and pervasive, and in some cases they have already been flagged to the companies.
Facebook still hosts pages boosting Islamophobic groups, including the Christian Action Network and Jihad Watch, as well as groups that have elevated white nationalists, like VDare, which have been banned from other platforms including Amazon's philanthropy service, AmazonSmile, and PayPal. Meanwhile, Facebook still lets them use its platform to fundraise unchecked.
The companies already have a model for addressing extremist content in how they've handled pro-Islamic terror content. While such material occasionally pops up on mainstream social networks, it is far less apparent than white nationalism and white supremacy. And while Facebook has vowed to stem racist content and taken some steps toward doing so, YouTube has largely declined to take a firmer stance against white nationalism on its service, let alone take sustained action.