Last week, Twitter unveiled its latest piece of purported Silicon Valley innovation, Fleets—a feature that it copied from Instagram Stories, which the Facebook-owned company originally lifted from Snapchat.
The rollout allows Twitter users in the United States to share pictures and video that automatically delete after 24 hours. Almost immediately, social media researchers pointed out the format’s potential to serve as a vector for spreading disinformation and extremist content.
“So far I have been able to fleet banned URLs, videos and disinformation about the election results. Wonder if extremist content would be possible in fleets. Welp this is a promising feature for disinformation and extremism reaearchers [sic]” — Marc-André Argentino (@_MAArgentino), November 18, 2020
The content Argentino and others surfaced—which Twitter’s terms of service already explicitly ban—isn’t Fleets’ biggest vulnerability. That distinction belongs to the ephemeral structure of the format itself.
Unlike Instagram Stories, Fleets aren’t designed to be shared beyond an individual account’s followers. A user could screenshot or screen record a Fleet and repost it themselves, but that added step introduces friction that Instagram Stories doesn’t. While the friction may slow the spread of bad information, repetition could still carry it to swaths of people large enough to be damaging. It also means the messages will mostly remain inside closed circles, just as in Facebook groups.
Closed, private Facebook groups and Instagram stories have already helped spread dangerous disinformation. In Oregon this past summer, false rumors about Antifa starting wildfires in the state spread quickly, almost certainly coming from within private Facebook groups. The stories inspired vigilantes with assault rifles to mount neighborhood patrols and set up military-style checkpoints. While no one ended up hurt, it’s not hard to imagine how a situation like that could turn deadly.
In April, as I was reporting on QAnon’s growing appeal in alternative health and wellness influencer communities, I noticed that the conspiracy was being trafficked largely through Instagram Stories. Influencers would post lengthy video monologues discussing false claims about how blood was being harvested from children kept in underground tunnels for elite liberal pedophiles. As I flipped through wellness Instagram accounts, I could watch QAnon content gain traction with other influencers, as they reposted Stories pushing Q. While the Stories were available to tens or hundreds of thousands of followers, the posts deleted themselves within 24 hours, making them difficult to document or debunk. And it was impossible to gauge how often they were being shared via private direct messages.
By spotting and bringing to light false or dangerous posts, journalists have become a de facto free content moderation service for social media platforms. While this is a problematic dynamic, journalists can’t even do this properly when the posts in question are made in ephemeral formats. Tweets, for all of their issues, are searchable and remain on the platform by default. Fleets will not.
Peter W. Singer, a senior fellow at the think tank New America and the author of LikeWar, a book on how social media has been weaponized in politics, says he’s been worried about the disinformation potential of Fleets since Twitter started testing the feature in Brazil in March. “So much of their system in actuality relies not on their own AI and content moderators, but on fellow users and researchers to flag violators. With Fleets, researchers won’t be able to see and track as much,” he told me via a Twitter direct message.
These aren’t problems that hiring oceans of underpaid, overworked contract moderators will ever solve—there will never be enough moderators to get ahead of offending content. With Fleets, the company has introduced a structural problem: a space where people can post misinformation and extremist content faster than moderators can ever track it, and where it won’t be easily visible to concerned users. Twitter will always be at least two steps behind.