New algorithms to spot online trolls: Study

By IANS | Published: January 09, 2020 1:02 PM

Researchers, including two of Indian origin, have demonstrated that machine-learning algorithms can monitor online social media conversations as they evolve, which could one day lead to an effective and automated way to spot online trolling.

Prevention of online harassment requires rapid detection of offensive, harassing, and negative social media posts, which in turn requires monitoring online interactions.

Current methods to obtain such social media data are either fully automated and not interpretable or rely on a static set of keywords, which can quickly become outdated. Neither method is very effective, according to Indian-origin researcher Maya Srikanth of the California Institute of Technology (Caltech) in the US.

"It isn't scalable to have humans try to do this work by hand, and those humans are potentially biased," Srikanth said.

"On the other hand, keyword searching suffers from the speed at which online conversations evolve. New terms crop up and old terms change meaning, so a keyword that was used sincerely one day might be meant sarcastically the next," she added.

According to the study, the research team used a GloVe (Global Vectors for Word Representation) model to discover new and relevant keywords.

GloVe is a word-embedding model, meaning that it represents words in a vector space, where the "distance" between two words is a measure of their linguistic or semantic similarity.
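In embeddings of this kind, the "distance" between two words is typically measured with cosine similarity between their vectors; the article does not specify the study's exact measure, so the following is only a minimal Python sketch using made-up three-dimensional vectors for illustration (real GloVe embeddings have 25 to 300 dimensions).

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    # Cosine of the angle between two word vectors: values near 1 mean the
    # words occur in similar contexts; values near 0 mean they are unrelated.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings, invented here purely for illustration.
vec_troll = np.array([0.82, 0.10, 0.31])
vec_harass = np.array([0.75, 0.18, 0.40])
vec_banana = np.array([0.05, 0.91, 0.02])

print(cosine_similarity(vec_troll, vec_harass))  # high -> semantically close
print(cosine_similarity(vec_troll, vec_banana))  # low  -> semantically distant
```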

Starting with a single keyword, the model can be used to find other words closely related to it, revealing clusters of relevant terms that are actually in use.

For example, searching Twitter for uses of "MeToo" in conversations yielded clusters of related hashtags like "SupportSurvivors," "ImWithHer," and "NotSilent."

This approach gives researchers a dynamic and ever-evolving keyword set to search.
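The article does not describe the team's exact implementation, but the general workflow can be sketched with off-the-shelf tools. The snippet below assumes the gensim library and its downloadable pretrained "glove-twitter-25" vectors; the model choice and seed word are illustrative assumptions, not the study's actual setup.

```python
import gensim.downloader as api

# Load pretrained GloVe vectors trained on Twitter text
# (downloaded on first use; model choice is illustrative).
model = api.load("glove-twitter-25")

# Expand one seed keyword into a cluster of closely related terms by
# looking up its nearest neighbours in the embedding space.
seed = "harassment"  # the study's example started from "MeToo"
if seed in model.key_to_index:
    for word, score in model.most_similar(seed, topn=10):
        print(f"{word}\t{score:.3f}")
else:
    print(f"'{seed}' is not in the model's vocabulary")
```

Re-running such a query against embeddings trained on fresh conversation data is what keeps the keyword set dynamic: as new terms crop up or old ones shift in meaning, they surface as close neighbours of the existing seeds.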

The project was a proof-of-concept aimed at one day giving social media platforms a more powerful tool to spot online harassment.

"The field of AI research is becoming more inclusive, but there are always people who resist change," said researcher Anima Anandkumar, who in 2018 found herself the target of harassment and threats online because of her successful effort to switch to an acronym without potentially offensive connotations.

"It was an eye-opening experience about just how ugly trolling can get. Hopefully, the tools we're developing now will help fight all kinds of harassment in the future," she added.

The study was presented on December 14 last year at the AI for Social Good workshop at the 2019 Conference on Neural Information Processing Systems in Vancouver, Canada.

(With inputs from IANS)
