What Facebook and Twitter Are Doing to Remove Bad Content

Facebook deleted 583 million fake accounts during the first quarter.
Morning Brew is a witty (and free) email newsletter delivering the latest news from Wall St. to Silicon Valley, daily.
Bad content? Like your neighbor Derek's trick shot compilation? Nope, like actually inappropriate content: graphic violence, terrorist propaganda, nudity, hate speech and fake accounts -- all of which violate Facebook's Community Standards.
And for the very first time, Facebook spilled the beans on exactly how much content it removes from its platform.
So ... what did we learn?
- Facebook has a spam problem: The vast majority of removed content (and we mean vast ... like 97 percent) was spam. And Facebook still estimates that 3 to 4 percent of its 2.2 billion monthly users are not real people at all.
- AI does a better job of flagging certain types of bad content than others: It flagged almost 100 percent of spam and 96 percent of adult nudity before any human found out. But it's got a much worse track record when it comes to hate speech. Only 38 percent was flagged by AI before a user complained ... which speaks to the tricky nuances of human language.
But bad content isn't just a Facebook problem.
Twitter is also trying a new approach to keep its trolls under the bridge and away from your feed. That includes demoting tweets from people who:
- Haven't confirmed their emails (sketchy).
- Sign up for multiple accounts at the same time (pretty sketchy).
- Spend a lot of time tweeting at people who don't follow them (ultra sketchy).
Is it working? It's a start. Twitter says the new approach has resulted in a "4 percent drop in abuse reports from search and 8 percent fewer abuse reports from conversations."