Facebook estimates hate speech seen in 1 out of every 1,000 views on its platform

For the first time, Facebook on Thursday revealed figures on the prevalence of hate speech on its platform, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech, a prevalence of 0.10% to 0.11%.

The world’s largest social media company, under scrutiny over its policing of abuses, particularly around November’s US presidential election, released the estimate in its quarterly content moderation report.

On a call with reporters, Facebook’s head of safety and integrity Guy Rosen said that from March 1 through the November 3 election, the company removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies.

Facebook also said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which was proactively identified, compared with 22.5 million in the previous quarter.

The company defines ‘taking action’ as removing content, covering it with a warning, disabling accounts, or escalating it to external agencies.

Facebook’s photo-sharing site Instagram took action on 6.5 million pieces of hate speech content, up from 3.2 million in Q2. About 95% of it was proactively identified, an increase of roughly 10 percentage points from the previous quarter.

This summer, civil rights groups organized a widespread Facebook advertising boycott to try to pressure social media companies to act against hate speech.

In October, Facebook said it was updating its hate speech policy to ban any content that denies or distorts the Holocaust, a turnaround from public comments Facebook’s CEO Mark Zuckerberg had made about what should be allowed on the platform.

Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content, up from 3.1 million in the second quarter.

Rosen said the company expected an independent audit of its content enforcement numbers “over the course of 2021.”

Earlier this week, Zuckerberg and Twitter CEO Jack Dorsey were grilled by Congress on their companies’ content moderation practices, from Republican allegations of political bias to decisions about violent speech.

The company has also been criticized in recent months for allowing rapidly growing Facebook groups sharing false election claims and violent rhetoric to gain traction.

In a blog post, Facebook said the COVID-19 pandemic continued to disrupt its content-review workforce, though it said some enforcement metrics were returning to pre-pandemic levels.

An open letter from more than 200 Facebook content moderators published on Wednesday accused the company of forcing these workers back to the office and ‘needlessly risking’ lives during the pandemic.

“The facilities meet or exceed the guidance on a safe workspace,” Facebook’s Rosen said on Thursday’s call.

Initial reporting via our official content partners at Thomson Reuters. Reporting by Jonathan Landay. Editing by Steve Orlofsky.
