Facebook reports spike in takedowns of hate speech and terrorism

Facebook has reported a sharp increase in the number of posts it removed for promoting hate speech or violence across its apps, a rise it attributed to improvements in the technology it uses to automatically identify text and images.

KEY POINTS

  • Facebook reports a sharp rise in the identification and removal of hate-related content across its apps
  • The company removed about 4.7 million posts connected to hate-based organisations in the first quarter of 2020, up sharply from 1.6 million in the previous quarter
  • Facebook also deleted 9.6 million posts it deemed to contain hate speech
  • Mark Zuckerberg said warning labels placed on some 50 million pieces of coronavirus-related content appeared to be effective

The social media giant removed approximately 4.7 million posts connected to hate organisations on its flagship app in the first quarter, up from 1.6 million in the 2019 fourth quarter. It also deleted 9.6 million posts containing hate speech, compared with 5.7 million in the prior period.

That marks a six-fold increase in hateful content removals since the second half of 2017, the earliest period for which Facebook discloses data.

The company also said it put warning labels on about 50 million pieces of content related to coronavirus, after taking the unusually aggressive step of banning harmful misinformation about the new coronavirus at the start of the pandemic.

“We have a good sense that these warning labels work. Ninety-five percent of the time that someone sees content with a label, they don’t click through to view that content,” Founder and CEO Mark Zuckerberg told reporters on a press call.

Facebook released the data as part of its fifth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent decorum rules in response to a backlash over its lax approach to policing content on its platforms. These include Facebook’s Messenger and WhatsApp mobile apps.

It expanded the report last year to include information about how it enforces rules on photo-sharing app Instagram and said on Tuesday it would begin releasing the data on a quarterly basis.

In a blog post announcing the report, Facebook highlighted improvements to its “proactive detection technology,” which uses artificial intelligence to detect violating content as it is posted and remove it before other users can see it.

“We’re now able to detect text embedded in images and videos in order to understand its full context, and we’ve built media matching technology to find content that’s identical or near-identical to photos, videos, text and even audio that we’ve already removed,” the statement said.
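For readers unfamiliar with “media matching,” the general approach the statement alludes to is typically built on perceptual hashing: a compact fingerprint of an image that stays similar under resizing or re-encoding, so new uploads can be compared against fingerprints of already-removed content. The sketch below is a minimal, hypothetical illustration of one such technique (a difference hash) in Python; Facebook has not disclosed its actual algorithms, and the function names, helpers and threshold here are assumptions for illustration only.

```python
# Illustrative only: a simple "difference hash" for near-duplicate image matching.
# Facebook has not published its media-matching implementation; this sketch just
# shows the general idea of comparing new uploads against hashes of removed content.
from PIL import Image  # requires the Pillow package


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Reduce the image to a tiny grayscale grid and encode brightness gradients."""
    img = (
        Image.open(image_path)
        .convert("L")                         # grayscale
        .resize((hash_size + 1, hash_size))   # one extra column for adjacent comparisons
    )
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left < right else 0)
    return bits


def is_near_duplicate(hash_a: int, hash_b: int, max_distance: int = 5) -> bool:
    """Hashes differing in only a few bits usually indicate near-identical images."""
    return bin(hash_a ^ hash_b).count("1") <= max_distance


# Hypothetical usage: compare a new upload against hashes of previously removed images.
# removed_hashes = {dhash(p) for p in ["removed_1.jpg", "removed_2.jpg"]}
# flagged = any(is_near_duplicate(dhash("new_upload.jpg"), h) for h in removed_hashes)
```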

Improvements to that technology also enabled the proactive removal of more drug-related and sexually exploitative content, the company said.

With fewer moderators available during the pandemic, Facebook has relied more on automated tools to police content as conspiracy theories about the coronavirus have spread online.

On the call, Mr Zuckerberg said contractors in some parts of the world were starting to return to their offices, but cautioned that the coronavirus adjustments were likely to have a heavier impact on the data in the second quarter.

Cindy Otis, a disinformation researcher and a former analyst at the CIA, noted that coronavirus-related abuse spiked in April, after the period covered in the report.

She urged Facebook to disclose how quickly it removes posts, a key indicator of the effectiveness of its systems, as it often appears to act only after content has gone viral and spread to other platforms.

“The pandemic has been the largest event in disinformation and misinformation history,” she said, “and that does not appear to show in their numbers they provide.”

Via our content partners at Reuters. Reporting by Katie Paul. Editing by Dan Grebler and Richard Chang.
