More than a year and a half after the platform was used to organize a white supremacist rally in Charlottesville, Virginia, Facebook announced on Wednesday that it would no longer allow “praise, support and representation of white nationalism and separatism on Facebook and Instagram.” The ban comes months after Facebook decided this fall to ban white supremacist content but not white nationalism.
“It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services,” a company spokesperson said in a press release on Wednesday. According to the announcement, the policy change was the result of three months of discussions with civil society groups and academics who “confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups.”
In discussing the policy change, Facebook included redacted examples of content that would be barred by the ban.
The policy changes, which go into effect next week, will not cover expressions of white or nationalist pride. The gray areas the policy still permits have raised alarms among some advocacy groups.
“We still have tremendous concerns about the policy,” says Keegan Hankes, an analyst at the Southern Poverty Law Center’s Intelligence Project, who says it was “ridiculous” that it took so long for Facebook to acknowledge the connection between white separatism and white supremacy.
The policy change comes just over a week after Facebook faced a wave of scrutiny for its role in facilitating the spread of the manifesto of a white supremacist who attacked mosques in Christchurch, New Zealand, earlier this month, murdering 50 Muslims, and of the livestream video he recorded of the killings. The shooting called into question Facebook’s historical efforts to combat Islamic extremism while turning a blind eye to anti-Muslim hate speech.
According to Hankes, much of the content that radicalized the shooter may not be banned under the new policies. He pointed to the “Right Stuff Radio Listeners” group, which allows users to share content from a network of white supremacist podcasts, and whose content was shared by the killer.
“They’re a great example of an organization that is going to keep their rhetoric right under the line,” says Hankes, who added that the new policy did not appear to ban the promotion of white supremacist theories like “The Great Replacement.”
Muslim Advocates, an organization that has been critical of Facebook for allowing anti-Muslim bigotry on its platforms, also expressed skepticism.
“We need to know how Facebook will define white nationalist and white separatist content—for example, will it include expressions of anti-Muslim, anti-Black, anti-Jewish, anti-immigrant, and anti-LGBTQ sentiment, all underlying foundations of white nationalism?” asked Madihha Ahussain, a lawyer with the organization, in a statement provided to Mother Jones.
“As the horror in New Zealand has once again shown us, hate has deadly consequences,” said Ahussain.
“As we have seen with tragic attacks on houses of worship in Charleston, Pittsburgh, New Zealand, and elsewhere, there are real life consequences when social media networks provide platforms for violent white supremacists, allowing them to incubate, organize, and recruit new followers,” said Kristen Clarke, president and executive director of the Lawyers’ Committee for Civil Rights Under Law, in a statement. “While Facebook’s new policies are one step forward in the fight against white supremacist movements, much work remains to be done. Without proper implementation, policies will prove to be just empty words, and white supremacy will continue to proliferate across its platform.”
Color of Change, which has been working to combat racism on Facebook for years, also sounded a note of cautious optimism.
“We are glad to see the company’s leadership take this critical step forward,” said Color of Change president Rashad Robinson in a statement. “We look forward to continuing our work with Facebook to ensure that the platform’s content moderation guidelines and training properly support the updated policy and are informed by civil rights and racial justice organizations.”
Facebook representatives conceded that the company won’t be able to catch all bad actors on the site.
“Unfortunately, there will always be people who try to game our systems to spread hate. Our challenge is to stay ahead by continuing to improve our technologies, evolve our policies, and work with experts who can bolster our own efforts,” a spokesperson wrote. “We are deeply committed and will share updates as this process moves forward.”
This post has been updated with additional response to Facebook’s announcement.