Originally Posted on WIRED
GIVEN THAT ISIS and other terrorist organizations have proven adept at using social media to disseminate propaganda and incite fear, it seems obvious that platforms like Facebook and Twitter would aggressively and mercilessly delete such content and ban those who post it.
It may seem equally obvious that those companies would move quickly to do just that when presidential candidates appear to call for them to help out and as US Representative Joe Barton asks the Federal Communications Commission, “Isn’t there something we can do under existing law to shut those Internet sites down?” But it’s not that simple, and social media platforms have grappled with the issue in some ways since at least the days when Al Qaeda affiliates started uploading videos to YouTube.
The problem lies in the global nature of social media, the reliance upon self-policing by users to identify objectionable content, and the fact that many of those banned simply open a new account and continue posting their hatred. A blanket policy of banning anything that might be seen as inciting violence also could lead to questions of censorship, because one person’s hateful propaganda could be another’s free speech. That’s not to say companies like Facebook and Twitter aren’t taking this seriously and trying to draw a distinction between the two. But it’s not as simple as you might think.
‘No Place for Terrorists’
Facebook says any profile, page, or group related to a terrorist organization is shut down and any content celebrating terrorism is removed. “There is no place for terrorists on Facebook,” says Facebook spokesman Andrew Souvall. “We work aggressively to ensure that we do not have terrorists or terror groups using the site, and we also remove any content that praises or supports terrorism.”
That approach seems to broadly work. Facebook has deleted posts and blocked accounts in such a way that ISIS-related newsletters, videos, and photos don’t seem to crop up as often there as they do elsewhere on the web, says Steve Stalinsky, executive director of the Middle East Media Research Institute. “Of all the companies, they’re the leader and the best at removing content,” he says.
Until last fall, Twitter had largely taken a more detached stance on ISIS-related content. It began taking a more aggressive approach after videos and images of journalist James Foley’s beheading spread on social media. Brookings Institution researcher J.M. Berger says the increase in suspensions of Twitter accounts in recent months has had a measurable effect. While an active social network typically grows over time, Berger says the suspensions have helped keep the size of the ISIS network on Twitter “roughly flat.” Moreover, users whose accounts are repeatedly suspended come back with new accounts that have fewer followers.
“The good news is that this limits the reach of their propaganda and recruiting, and makes it harder for ISIS to accomplish its goals online,” Berger says.
But Twitter’s efforts don’t satisfy all critics, who say the platform remains a primary tool for ISIS to spread its message and even recruit new members. “It’s not that Twitter isn’t removing accounts,” Stalinsky says. The company does suspend high-profile ones, but the people behind shut-down accounts quickly reappear with new ones, he says. “If they were serious, they’d use the proper technology to get them to not come back.”
Twitter declined to respond to specific questions from WIRED about how it handles ISIS propaganda, but the company told The Washington Post earlier this year that “Twitter continues to strongly support freedom of expression and diverse perspectives… but it also has clear rules governing what is permissible.” The company did tell WIRED that its publicly stated policies prohibit certain content: “Users may not make threats of violence or promote violence, including threatening or promoting terrorism.”
Propaganda or Political Speech
But the challenge for sites like Facebook and Twitter goes beyond tracking down content that promotes terrorism. It also requires defining “promoting terrorism.” In a sense, the two platforms are global communities, each engaged in a constant process of determining community norms as the use of the platforms evolves.
Facebook has long been a “place” where users could expect content that violated certain community standards to be removed. Porn and nudity, for instance, are strictly prohibited. Twitter, on the other hand, has long sought to remain more open, although it too has guidelines for when content on the platform goes too far.
“Twitter stands for freedom of expression,” founder and chief executive Jack Dorsey said earlier this year, “and we will not rest until that is recognized as a global fundamental human right.” But how does that fundamental right square with propaganda so closely tied to horrific violence? Some critics believe the stakes are too high not to err on the side of aggressive removal.
“We’re seeing a weaponization of these platforms by terrorists,” says Mark Wallace, the chief executive officer of the Counter Extremism Project and former United States Ambassador to the UN under President George W. Bush. He likens graphic ISIS videos or photographs to child pornography, which he says “would be removed expeditiously.”
But free speech activists worry that when government officials encourage the policing of certain kinds of speech, it veers uncomfortably close to censorship. “I think we have to ask if that’s the appropriate response in a democracy,” says Jillian York, director for International Freedom of Expression at the Electronic Frontier Foundation.
“While it’s true that companies legally can restrict speech as they see fit, it doesn’t mean that it’s good for society to have the companies that host most of our everyday speech taking on that kind of power.”