Facebook reveals how much content it has removed from its platform since October

Facebook has released internal figures on abusive content found and removed from the site for the first time, revealing the scale of malicious content on the social network. On Tuesday, the Menlo Park-based company shared numbers from its first Community Standards Enforcement Report, which illustrate how effectively the company has been policing its own platform.

The report covers six categories of violations: graphic violence, adult nudity and sexual activity, terrorist propaganda, spam, hate speech and fake accounts.

The report said Facebook removed, or placed a warning screen in front of, 3.4 million pieces of graphically violent content in the first quarter, almost triple the 1.2 million a quarter earlier. In the first quarter alone, Facebook also disabled roughly 583 million fake accounts, most of them within minutes of registration.

Facebook's vice president of product management, Guy Rosen, said in a blog post Tuesday about the newly released report that nearly all of the 837 million spam posts Facebook took down in the first quarter of 2018 were found by the company before anyone had reported them.

Over the last 18 months, Facebook has significantly expanded its efforts to identify inappropriate content and protect users, Rosen said.

Instead of trying to determine how much offending material it didn't catch, Facebook estimated how frequently users saw posts that violated its standards, including content its screening systems failed to detect.

Facebook says it has more than 2 billion users. The company estimates that out of every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated its adult nudity and pornography standards. Its detection systems handle some categories better than others, and Facebook admits that hate speech remains a detection problem: users are still the ones reporting the majority of hate-speech posts, about 62 percent of them, before Facebook takes them down.
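To put those figures in perspective, here is a minimal Python sketch that converts the reported rates into percentages. This is illustrative arithmetic only, using the numbers cited above; it is not Facebook's methodology or code.

```python
# Illustrative arithmetic based on the figures reported above;
# not Facebook's methodology or internal code.

low, high, per_views = 7, 9, 10_000  # 7-9 violating views per 10,000 views

# Prevalence of adult nudity/pornography violations as a share of all views.
print(f"Prevalence: {low / per_views:.2%} to {high / per_views:.2%}")
# -> Prevalence: 0.07% to 0.09%

# Hate speech: ~62% of removed posts were reported by users first,
# implying automated systems flagged only the remaining ~38%.
user_reported_share = 0.62
print(f"Flagged by automation: {1 - user_reported_share:.0%}")
# -> Flagged by automation: 38%
```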

Facebook says AI has played an increasing role in flagging this content.

For years, Facebook has relied on users to report offensive and threatening content. In the case of hate speech, just 38% of removed posts were flagged by the company's automated technology, leaving the rest to be discovered and reported by humans. One potential flaw in the report is that the data doesn't account for any bad content the company may have missed entirely. Facebook is also adding more Burmese-language reviewers to its content moderation efforts, in response to complaints about its handling of hate speech in Myanmar.

Chief Executive Officer Mark Zuckerberg faced several questions about content removal during his April congressional testimony. Meanwhile, Damian Collins, chair of the UK's Digital, Culture, Media and Sport Committee, said in a statement Tuesday that Facebook had told the committee Zuckerberg "has no plans to travel to the United Kingdom".
