
Facebook removes 22.5mn hate speech content, 1.5bn fake accounts in Q2

By IANS | Updated: August 11, 2020 23:05 IST


San Francisco, Aug 11 Facebook on Tuesday said that it purged 22.5 million pieces of hate speech content in the second quarter (April-June) this year, up from 9.6 million pieces in Q1, and that its proactive detection rate for hate speech rose 6 points, from 89 per cent to 95 per cent.

When it comes to fake accounts, Facebook said it removed 1.5 billion such accounts in Q2, down from 1.7 billion accounts in Q1.

The social network attributed the decline to improved blocking at the point of account creation: when more attempts are blocked, there are fewer fake accounts left to disable, "which has led to a general decline in accounts actioned since Q1 2019".

"We estimate that fake accounts represented approximately 5 per cent of our worldwide monthly active users (MAU) on Facebook during Q2," Guy Rosen, VP Integrity at Facebook, said in a blog post detailing the sixth edition of the 'Community Standards Enforcement Report'.

On Instagram, the proactive detection rate for hate speech increased 39 points from 45 per cent to 84 per cent in the June quarter, said Facebook.

This resulted in action on 3.3 million pieces of hate speech content on Instagram in Q2, up from 808,900 in Q1.

"These increases (on Instagram) were driven by expanding our proactive detection technologies in English and Spanish," Rosen said.

Another area where Facebook claimed it saw improvements was content related to terrorism.

"On Facebook, the amount of content we took action on increased from 6.3 million in Q1 to 8.7 million in Q2," Rosen said.

"We saw increases in the amount of content we took action on connected to organized hate on Instagram and bullying and harassment on both Facebook and Instagram".

Facebook said that because it prioritised removing harmful content over measuring certain efforts during this period, it was unable to calculate the prevalence of violent and graphic content, or of adult nudity and sexual activity.

"We want people to be confident that the numbers we report around harmful content are accurate, so we will undergo an independent, third-party audit, starting in 2021, to validate the numbers we publish in our Community Standards Enforcement Report," Rosen said.

Due to the COVID-19 pandemic, Facebook sent its content reviewers home in March and relied more heavily on technology to review content.

The company said it has since brought many reviewers back online from home and, where it is safe, a smaller number back into the office.

The company admitted that the number of appeals against content and account removals was much lower in Q2 "because we couldn't always offer them".

"We let people know about this and, if they felt we made a mistake, we still gave people the option to tell us they disagreed with our decision".

(With inputs from IANS)
