Over the last two years, the relationship between civil rights groups and social media goliath Facebook has, like a roller coaster, been full of ups, plunging downs, unexpected turns, and dizzying about-faces. Advocates have experienced highs of optimism, believing Facebook would take the rights and experiences of minority communities seriously, only to plummet to valleys of disillusion when the company would later betray that trust.
Today, Facebook released a comprehensive civil rights report two years in the making. When it was conceived, the project embodied the activists’ hope for the company’s potential for progress. Now, as it is released, the relationship between racial justice advocates and the tech giant has reached a new low, with Facebook under fire for allowing recent false and hateful statements from President Donald Trump to remain on the platform. The decision gutted the company’s own progress and undermined confidence that the company will ever take hate speech, voter suppression, or other civil rights priorities seriously.
The audit acknowledges this “seesaw of progress and setbacks” in an introduction, written by Laura Murphy, the outside civil rights expert who led the project. Despite notable areas of improvement, Murphy, who formerly led the ACLU’s legislative office in Washington, lamented that Facebook has still not fully embraced a pro-civil rights outlook. “With each success the Auditors became more hopeful that Facebook would develop a more coherent and positive plan of action that demonstrated, in word and deed, the company’s commitment to civil rights. Unfortunately, in our view Facebook’s approach to civil rights remains too reactive and piecemeal.”
“While the audit process has been meaningful, and has led to some significant improvements in the platform, we have also watched the company make painful decisions over the last nine months with real world consequences that are serious setbacks for civil rights,” Murphy writes in the report, an assessment that echoes civil rights advocates’ own complaints.
The past several weeks have illustrated this very shift. When Trump falsely tweeted that voting by mail would unleash massive fraud, Twitter fact-checked the tweet. By contrast, Facebook CEO Mark Zuckerberg went on Fox News to condemn Twitter, all but announcing that his company would protect messages spreading such disinformation. When Trump reacted to protests over the death of George Floyd by tweeting the words of a racist sheriff from the 1960s—“when the looting starts, the shooting starts”—Twitter tagged the post for glorifying violence. Zuckerberg allowed the same words to stand on Facebook, where the president’s post generated more than 250,000 likes and tens of thousands of comments and shares.
The danger of Trump’s posts is obvious just by looking at the comments that unfurl beneath them. Supporters respond to allegations of mail-in ballot fraud by promising to vote in person despite the pandemic. When Trump claimed the US had the lowest death rate from coronavirus, supporters responded by writing that the coronavirus is a hoax. But the audit itself found that allowing Trump’s posts to stay on the platform has even more pervasive effects. “After the company publicly left up the looting and shooting post, more than five political and merchandise ads have run on Facebook sending the same dangerous message that ‘looters’ and ‘ANTIFA terrorists’ can or should be shot by armed citizens,” the report notes, indicating that the company should have detected and removed the ads before they “collectively received more than two hundred thousand impressions.”
Advocates’ frustration at these latest developments, which come just months ahead of the 2020 elections and in the midst of a protest movement sparked by police brutality, led them to organize an advertising boycott of Facebook. The effort, dubbed Stop Hate for Profit, has prompted hundreds of companies to pull ads from the platform, which earns its billions—$77 billion in 2019—from ad revenue.
Facebook’s agreement to commission the audit marked the culmination of years of civil rights organizations working cooperatively with Facebook to address inequities on the platform, an inside strategy featuring multiple meetings governed by non-disclosure agreements where advocates worked to educate executives and appeal to their moral sensibilities. But the final report lands just as many of these same activists have largely abandoned the inside track, instead pushing outside action designed not to appeal to the company’s conscience, but to its bottom line.
Throughout the report’s 89 pages, the same theme comes through again and again. Small steps have been taken, but they need to be made systemic rather than piecemeal, mandatory rather than voluntary, broad rather than narrow. While the audit could serve as a roadmap to building a more responsible and inclusive platform, for that to happen, as both activists and the auditors explain, the company would have to build a corps of civil rights experts empowered to make decisions that CEO and co-founder Mark Zuckerberg would abide by. That hasn’t happened.
Advocates agree with that assessment. “They’ve fallen short of the bold action we need,” says Steven Renderos, executive director of MediaJustice, a group that has been on the frontlines with Facebook for years. “People of color are not better off than they were two years ago when the civil rights audit started. Facebook has chosen to adopt incremental changes that have been insufficient for big problems like the spread of disinformation, hateful activities and the criminalization of people of color on Facebook.”
Renderos and other advocates highlight the company’s faltering enforcement—and Zuckerberg’s determinative role—when it comes to one Facebook user in particular: Donald Trump. “The positive changes Facebook has taken on like its voter suppression policies have been undermined by inconsistent applications of those rules towards Donald Trump,” he says. “Those are decisions that are being made at the senior leadership levels at Facebook and point to a more significant problem, the lack of robust and permanent civil rights infrastructure at Facebook that can continue to tackle these issues moving forward.”
As Mother Jones reported last year, Facebook resisted advocates’ efforts to address discrimination issues on the platform for a long time, including calls for such an audit. But after a series of high-profile public relations crises—the Cambridge Analytica scandal revealing improperly obtained user data was used to target voters, Russia’s use of the platform to help elect Trump, the product’s role in sparking a genocide in Myanmar—Facebook agreed to a civil rights audit in 2018.
But just a few months later, the New York Times reported that Facebook had hired a Republican opposition research firm to discredit its critics, including Color of Change, one of the civil rights organizations that had lobbied for the audit. The firm, Definers Public Affairs, had sought to tie the nonprofit to billionaire philanthropist George Soros, whose status as a right-wing boogeyman often carries the tinge of anti-Semitism. Facebook had not only gone after a civil rights group, but it had done so by fueling the very type of bigotry that civil rights groups were trying to get off the platform.
In the public fallout of the Definers revelation, Facebook recommitted to the audit. COO Sheryl Sandberg became more engaged and optimism climbed. In September of 2019, Facebook and Color of Change together organized a summit on discrimination in technology, a new peak in collaboration between Facebook and the civil rights community. But two days before it began, Facebook announced a shocking policy: it would allow politicians to lie in posts and advertisements. Without warning, the company had opened a truck-size loophole in its efforts to contain hate, voter suppression, and misinformation. “Facebook is saying: If you are a politician who wishes to peddle in lies, distortion and not-so-subtle racial appeals, welcome to our platform,” Vanita Gupta, who runs the Leadership Conference on Civil and Human Rights, wrote at the time. This was the beginning of a new phase in the advocates’ struggle with Facebook, as the company began to walk back its own policies in order to appease the president and his allies. The audit criticizes the approach, assailing the decision to allow Trump’s recent incendiary posts to stay online as “vexing and heartbreaking” and representing “significant setbacks for civil rights.”
Not all of Facebook’s failures are explained by a deference to Trump, as there are other areas in which the company has simply refused to make significant improvements. One of those is anti-Muslim activity, which civil rights advocates believe has repeatedly fallen through the cracks. “Facebook is enabling violence and genocide against Muslims,” said Farhana Khera, executive director of Muslim Advocates, a group that has been among the earliest critics of Facebook and a key driver of the audit report, in response to the final report. Among the group’s unaddressed demands is the need for a policy banning the use of Facebook to organize events intended to terrorize Muslims and other at-risk communities. “This audit is illuminating but it is ultimately meaningless if Facebook does not agree to take dramatic and substantial steps to address the many failures outlined in the report,” she says.
Hate speech and harassment also remain an issue, as Facebook has increasingly relied on artificial intelligence and predictive technology to remove hate speech. While the report notes that in March 2020, 89 percent of removals “were identified by…technology before users had to report it,” Black users have found their discussions of racism have been taken down for including phrases like “white men”—a sign that Facebook’s predictive technology censors communities of color more than their harassers. Other modest steps by the company to improve content moderation and hone its definition of hate speech have not reached Facebook users who experience harassment and hate, such as Black Lives Matter activists, who share stories of constant harassment and violent threats that remain on the site. The problem is illustrated in Facebook’s refusal to broadly define white nationalism or harassing content, which allows hateful users to get the same point across by tweaking words. In one example flagged by the audit, Facebook’s ban on “white nationalism” and “white separatism” only applies to the use of those precise words, and therefore “does not prohibit content that explicitly espouses the very same ideology without using those exact phrases.”
As reporters pored over an embargoed copy of the report Tuesday night, many of the advocates who pushed for the audit were filling their inboxes with another round of statements expressing disappointment toward Facebook, this time following a meeting they had with Zuckerberg about the demands of the boycott. “It was abundantly clear in our meeting today that Mark Zuckerberg and the Facebook team is not yet ready to address the vitriolic hate on their platform,” the action’s leaders announced Tuesday night. “Zuckerberg offered the same old defense of white supremacist, antisemitic, islamophobic and other hateful groups on Facebook.” At Facebook, reform can only be as effective as its leader allows it to be.