Shortly before walking into a synagogue near San Diego on Saturday and embarking on a shooting spree, the alleged gunman posted a manifesto on the message board 8chan explaining his bigoted ideology and motivations for the attack, which left one person dead and three injured. His post and manifesto were constructed to look just like the post and manifesto the alleged New Zealand shooter put on 8chan before killing 50 at two mosques in Christchurch in March.
“I’ve only been lurking for a year and a half, yet what I’ve learned here is priceless,” the San Diego man wrote on his 8chan post above links to his manifesto and his Facebook page, where he planned to livestream the attack. In the manifesto, he cited the New Zealand shooter as an inspiration, who himself had written that he spent time “lurking” on 8chan as part of his process of radicalization into white supremacy.
After both shootings, other 8chan posters cheered on and praised the attackers. The message board’s integral role in both deadly shootings has led to growing calls to ban the site entirely, and to debates over free-speech concerns.
“If I were the CEO of a company that’s responsible for enabling 8chan to exist, I would take it down,” said Dipayan Ghosh of Harvard Shorenstein Center’s Platform Accountability Project, referring to domain hosting services and security providers like Cloudflare. “If I were a monarch, who had full power over the information and media ecosystem, I would probably take this down as well. I know that it sets a difficult precedent in how we should think about free speech, but I think it’s time that we start to reassess free speech.”
Before it became known for motivating shootings, 8chan had already gained a toxic reputation. Its political board is a wasteland of bigotry targeting a range of social groups. White nationalist groups like Identity Evropa have openly recruited there. Writing this article carries a not-insignificant chance that I’ll be subjected to a cascade of internet harassment and threats from its users.
Following the massacre in Christchurch, internet service providers in Australia and New Zealand took it upon themselves to block the site, something their American equivalents have never done. Both Australia and New Zealand have stricter internet censorship rules than the U.S.
Ben Decker, who runs the media and tech investigations consultancy Memetica, pointed out that the government might not have a clear way to handle sites like 8chan that facilitate egregious crimes, but aren’t created for that specific purpose in the way child pornography sites are, which law enforcement always takes down.
“I don’t think we have an internet governance model that allows us to make a decision about that. We never define what is okay and not okay as a shared standard across ISPs. Until we can have a baseline of what’s acceptable and what’s not, I’m not sure [about banning 8chan],” Decker said.
Aside from concerns about barriers to potential action against 8chan, if there were a physical version of the site, it’s unlikely that there would be much debate over what to do. Imagine a warehouse where a group of loosely affiliated people were meeting, teaching each other about white supremacy, evangelizing Nazism, and every so often going out and attempting mass murders, arson, and other hate crimes. People would think, “We probably need to do something about this warehouse.” It’s very unlikely that anyone would say “Even though these guys sometimes kill, it’s really important that we give them a place to keep hanging out and organizing.”
As another thought experiment: picture the same warehouse, replace the neo-Nazi leanings with pro-ISIS or al-Qaeda ones. The FBI would have raided it and arrested its members without any debate.
“We would take down platforms that facilitate child pornography,” said Ghosh, pointing to the darknet child porn site Playpen, which was shut down in 2015 by the FBI. “Murder and terrorism are right in that alley.”
But digital spaces are different from physical ones and sometimes warrant being treated differently.
Real-life spaces don’t offer easy anonymity and are limited by physical restrictions. Shutting them down can be more effective than shutting down digital spaces of hate, and can have more long-lasting effects. The gang might have a hard time finding a new discreet warehouse, especially when the community and law enforcement are on to the fact that dangerous white supremacists like to hang out in warehouses. But on the internet, shutting down one community likely means that at least some of its members will easily move to another one that’s often more extreme than the place they just left, perhaps without anyone noticing. For example, according to a joint analysis by the Anti-Defamation League and the Network Contagion Research Institute, user bans on Twitter often correlate with spikes in new membership on Gab, Twitter’s “free-speech alternative,” which has become a haven for the alt-right and other hate groups.
The story of how 8chan came to be is itself a tale of an attempt to crack down on an isolated online community, leading to a new, even worse gathering place. After some 4chan users felt that moderation was becoming too strict during the misogynist Gamergate harassment campaign, they left to launch 8chan, which effectively became a distilled version of the worst parts of 4chan. Many 8chan migrants came from one of 4chan’s worst boards, /pol/, which was itself originally created in an attempt to quarantine racism from other boards of 4chan.
Banning 8chan could also play directly into the hands of some of the site’s users who think white-nationalist movements, with narratives of victimhood at their core, grow faster when oppressed. A ban would help them drive the narrative that the board’s users actually are being targeted.
There are potential avenues for limiting 8chan’s effectiveness outside of an outright ban.
Decker, a former internet extremism researcher at Harvard’s Shorenstein Center, thinks that tech companies like Facebook, Twitter, and Reddit could better police harmful 8chan links on their platforms. He suggested they consider automatically flagging any 8chan content linked from their sites for moderators to review, stemming the spread of hateful posts.
“It’s not about taking away people’s right to communicate on the internet. It’s about containing speech that is harmful to the public square, that can be tied to mass casualty violence,” Decker said. “We need to think about how we can quarantine these types of communities, so they’re prevented from harming and radicalizing others who go off and take action in the real world.”
After the San Diego shooting, Brianna Wu, a software developer who was targeted and harassed during Gamergate, advocated that law enforcement curb 8chan by investigating and prosecuting common internet crimes taking place on the site.
“If you want to stop future mass shootings planned on #8chan, the course of action is very simple,” Wu wrote. “Get a warrant, and START PROSECUTING all the blatantly illegal actions taken there. It is a crime to post stolen credit card information. It is a crime to host child pornography.”
There are also limits to what taking action online can do. Banning 8chan won’t do much about hate fomenting in the real world, or stop bigotries that existed long before the internet.