Can Twitter Censor Terrorists and Trolls Without Silencing Free Speech? Social media, once a bastion of open discourse, is under mounting pressure to define and remove "hate" speech.
By Steve Tobak Edited by Dan Bova
Opinions expressed by Entrepreneur contributors are their own.
Twitter is taking aggressive steps to rid the site of terrorists and trolls. The platform has suspended hundreds of thousands of accounts linked to radical extremism, clarified its rules on abuse, and rolled out new tools to help users filter their feeds and improve their experience. Sounds like a no-brainer, right? Not exactly.
It's one thing to fight violence and give users more control, but there's a fuzzy line between freedom of expression and offensive hate speech. Limiting one without impacting the other is practically impossible. And yet, social media, once a bastion of open discourse, faces mounting pressure to censor news feeds.
Back in May, Twitter joined Facebook, Microsoft and YouTube in agreeing to a European Commission (EC) "Code of Conduct" that calls for them to remove "illegal, online hate speech" from their sites within 24 hours of notification. It bears mentioning that, with rare exception, hate speech is also protected speech in the United States.
Related: The Harsh Lesson Everyone Can Learn From Justin Bieber Fleeing Instagram
If you wade through the pages and pages of bureaucratic rhetoric, the problem with this kind of online censorship becomes abundantly clear: the commission's definition of illegal hate speech is about as clear as mud … if you can find it, that is.
The EC news release cites a "framework decision on combating certain forms of racism and xenophobia by means of criminal law," which in turn cites a council framework decision and a joint action, each of which goes on and on. There's also the EU Colloquium on Fundamental Rights, the EU Internet Forum to save the public from terrorist exploitation of communication channels, the e-Commerce Directive on take-down procedures, and the Joint Statement following the Brussels terrorist attacks, which brings us full circle back to the Code of Conduct.
Finally, I was able to boil down the definition of criminal online content as that which "promotes incitement to violence and hateful conduct directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin."
And therein lies the rub. Who gets to decide what constitutes speech that "incites violence" and what exactly is "hateful conduct," anyway?
Related: 5 Ways to Cope With Online Haters
Subjective as that is, the bigger issue is that all this nonsense is intended to stop racism and xenophobia, which, in turn, is intended to stop the terrorists. In other words, these loony tunes in Brussels seem to think that their own hatred and fear of Muslims is the cause of radical Islamic terrorism, not the other way around. This is how they mean to combat the brutal attacks occurring all over Europe.
While Europe is definitely further along, there is a growing school of thought on this side of the Atlantic that we did this to ourselves and, if we can all just get along, the bad guys will leave us alone, Pollyannaish as that sounds. Of course, following the European Union (EU) down the slippery slope of censorship under some misguided belief that it will stop terrorism is ludicrous. Besides, I doubt if that's Twitter's motive.
The San Francisco-based company has long sought a solution to the vitriol-spewing trolls who scare away the Twitterati -- celebrities with gazillions of followers who are the lifeblood of the site. A year and a half ago, then-CEO Dick Costolo lamented what he called an embarrassment: "I'm frankly ashamed of how poorly we've dealt with this issue," he wrote in an internal email. "It's absurd. There's no excuse for it."
That's not exactly true. There is an excuse for it: There is no obvious solution. Censorship is tricky business. It's not black and white. That's why it has taken Twitter so long to deal with the issue. Also, Twitter is just plain slow, but that's another story for another day.
Don't get me wrong. I like what the company rolled out last week. Keeping radical extremists from propagating disturbing content and using the site as a recruiting tool is long overdue. So is extending to all users the filters that, until recently, had been available only to verified accounts. Now everyone can limit tweets to those they follow or use the "quality filter" to cut out some of the noise.
More important, I like that CEO Jack Dorsey is keenly aware he's walking a fine line. "We are not and never will be a platform that shows people only part of what's happening or part of what's being said. We are the place for news and social commentary," he said on a recent earnings call.
On the other hand, "abuse is not part of civil discourse. It shuts down conversation. It prevents us from understanding each other," he said. "Freedom of expression means little if we allow voices to be silenced because of fear of harassment if they speak up. No one deserves to be the target of abuse online, and it has no place on Twitter."
There is that rub again -- finding that elusive line between freedom and censorship.