
This is how Facebook detects harmful content with AI systems

By IANS | Published: August 13, 2020 9:41 AM

New Delhi, Aug 13 For effective content moderation, Facebook is relying on three aspects of technology to transform its content review process across its family of apps.

The first aspect is called 'Proactive Detection', where artificial intelligence (AI) detects violations across a wide variety of areas without relying on users to report content to Facebook, often with greater accuracy than user reports.

"This helps us detect harmful content and prevent it from being seen by hundreds or thousands of people," the company said in a statement.

The second aspect is 'Automation', where AI systems make automated decisions in areas where content is highly likely to be violating.

"Automation also makes it easier to take action on identical reports, so our teams don't have to spend time reviewing the same things multiple times. These systems have become even more important during the Covid-19 pandemic with a largely remote content review workforce," said Jeff King, Director Product Management, Integrity at Facebook.

The third aspect is 'Prioritisation'.

Instead of simply looking at reported content in chronological order, AI prioritises the most critical content to be reviewed, whether it was reported to Facebook or detected by its proactive systems.

"This ranking system prioritizes the content that is most harmful to users based on multiple factors such as virality, severity of harm and likelihood of violation," Kind added.

However, Facebook admitted there are still areas where it's critical for people to review the content.

"For example, discerning if someone is the target of bullying can be extremely nuanced and contextual. In addition, AI relies on a large amount of training data from reviews done by our teams in order to identify meaningful patterns of behaviour and find potentially violating content".

Facebook said it will first use its automated systems to review violations like spam, before expanding them to review more content across all types of violations.

(With inputs from IANS)
