On Monday afternoon, as one of Europe’s most famous landmarks was engulfed in flames from an accidental fire, a Twitter user with a MAGA-hat-clad profile picture and over 30,000 followers hit send on a post spreading baseless rumors of arson. “BREAKING: Workers at Notre Dame Cathedral in Paris saying fire was intentionally set,” read the tweet from Carmine Sabia. An hour and 562 retweets later, he sent another message attempting a backpedal. “There is no evidence if it is intentional or accidental. Im relaying what some workers have said,” the tweet read, offering no explanation of who the supposed workers were or whether Sabia was in touch with them directly.
Neither the second tweet, nor Sabia’s lack of clear links to Paris or credentials as a traditional journalist, kept his misinformation from ripping across Twitter. An account popular for supporting the QAnon conspiracy theory quote-tweeted Sabia’s first unsubstantiated claim to its more than 100,000 followers, with an added flourish: “False flag confirmed.” By the next day, his tweet had amassed over 4,100 retweets and over 5,700 likes.
While Sabia lacked professional bona fides, he did have a verified Twitter account, and the little blue checkmark next to his username that comes with it. That may have been all it took to give his tweet the veneer of credibility it needed to go viral.
That wouldn’t surprise researchers like Ben Decker, who runs the media and tech investigations consultancy Memetica. He says his work suggests that a retweet by a verified account is often the catalyst that helps an unsubstantiated or fringe claim spread like wildfire. “In 2018 research I did at the Shorenstein Center, we were able to recognize that the major amplification and tipping point for these claims was when a verified account started pushing it out,” Decker recalled, referring to his time as a fellow at Harvard’s Shorenstein Center on Media, Politics and Public Policy.
Decker recalled one specific example of a fake GIF edited to look like Emma Gonzalez, the Parkland shooting survivor and gun control advocate, was ripping up the Constitution. (The original video was from a Vogue photo shoot where she ripped a gun range target.) “As soon as James Woods and Stephen Baldwin tweeted about it, that added so much more to its half-life in regard to its exposure on the internet,” Decker explained.
Verified accounts are generally held by people in the public eye and are not anonymous, which creates some incentive for their holders not to spread false information. Of course, there are exceptions. Verified accounts held by far-right figures—including Jack Posobiec, James Woods, Stefan Molyneux, and occasionally Mike Cernovich—routinely play fast and loose with facts. Sometimes they’ll mention an unsubstantiated theory about a current event that comports with their worldviews, before quickly stepping back to say that they are only “asking questions” by raising the claim.
As the fire burned on Monday, each of these accounts pushed theories about whether the fire was intentionally set. Their posts fed into a rapidly burgeoning Islamophobic narrative that Muslims had been behind the fire.
Though they weren’t the only accounts suggesting, without evidence, that the fire had been intentionally set, they were among a small number of verified accounts to do so. The Notre Dame fire isn’t their first foray into pushing flimsy, bad-faith theories. Posobiec is best known for his dalliance with Pizzagate, an absurd conspiracy theory about a Democratic child sex ring in the basement of a D.C. pizzeria. Molyneux often spreads baseless “race science.” Woods was suspended from Twitter on at least one occasion for sharing a hoax meme.
Whatever Twitter’s reason for maintaining the accounts’ verified status—and the company did not directly respond to Mother Jones’s question about it—their verifications are likely lending credence to the misinformation they spread. While Twitter has explained in the past that verification only signals that a public figure’s account is actually controlled by that person, regular Twitter users often invest the blue checkmark with other meanings.
“Verification is so ambiguous. A lot of times it’s meant to be a marker of identity, but also acts unofficially as a status marker and a trust marker,” said Becca Lewis, a PhD student researching technology and media at Stanford.
In 2017, the company appeared to quietly back away from its original position on verification by revoking the symbol from accounts maintained by prominent white nationalists, including Jason Kessler and Richard Spencer, without explaining the decision. Yet even if the company decided those accounts were too racist to retain verification, other white nationalists—like Molyneux, Lauren Southern, and Faith Goldy, who also spread Notre Dame misinformation—retain the status.*
In a statement to Mother Jones, the company said that it had “paused public submissions for verification” since November 2017 and was developing “a new authentication and verification program.”
While removing such accounts’ verification would risk a right-wing backlash, Lewis says that’s not a legitimate excuse for the company to leave their status untouched.
“I get the argument that there’s a potential PR element, but that is explicitly not censorship. People can still say whatever they want to without a blue check,” she said. “The check acts as an implicit endorsement by Twitter, and that’s all that removing it is taking away.”
Correction: The original version of this article stated that Southern’s verification had been revoked. It has not.