
IIT Madras, UK researchers develop technology to make AI fairer

By IANS | Updated: January 29, 2020 16:45 IST

Researchers from the Indian Institute of Technology Madras (IIT-Madras) and Queen's University Belfast in the UK have developed a new algorithm to make Artificial Intelligence (AI) fairer and less biased when processing data.


Companies often use AI technologies to sift through huge amounts of data in situations such as an oversubscribed job vacancy or in policing when there is a large volume of CCTV data linked to a crime.

"AI techniques for exploratory data analysis, known as 'clustering algorithms', are often criticised as being biased in terms of 'sensitive attributes' such as race, gender, age, religion and country of origin," said study researcher Deepak Padmanabhan from Queen's University Belfast.

It has been reported that white-sounding names received 50 per cent more call-backs than those with black-sounding names.

Studies also suggest that call-back rates tend to fall substantially for workers in their 40s and beyond.

When a company faces a process involving large amounts of data, it is impossible to sift through it manually.

Clustering is commonly used in processes such as recruitment, where thousands of applications are submitted.

While this saves time when sifting through large numbers of applications, there is a big catch: the clustering process is often observed to exacerbate workplace discrimination by producing clusters that are highly skewed.

Over the last few years, 'fair clustering' techniques have been developed; these prevent bias in a single chosen attribute, such as gender.

The research team has now developed a method that, for the first time, can achieve fairness in many attributes.

"Fairness in AI techniques is of significance in developing countries such as India. These countries experience drastic social and economic disparities and these are reflected in the data," said Savitha Abraham from IIT Madras.

"Employing AI techniques directly on raw data results in biased insights, which influence public policy and this could amplify existing disparities. The uptake of fairer AI methods is critical, especially in the public sector, when it comes to such scenarios," Abraham added.

"Our fair clustering algorithm, called 'FairKM', can be invoked with any number of specified sensitive attributes, leading to a much fairer process," the researchers said.
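The idea behind fair clustering can be illustrated with a toy example. The sketch below is not the researchers' FairKM implementation (which is not described in detail here); it is a hypothetical illustration of how one might measure whether a clustering is skewed with respect to one or more sensitive attributes, by comparing each cluster's attribute distribution against the distribution in the overall pool. A fair clustering keeps the two close; the function names and data are invented for the example.

```python
# Illustrative sketch only (not the FairKM algorithm): quantifying how skewed
# a clustering is across any number of sensitive attributes. A deviation of
# 0.0 means every cluster mirrors the population distribution exactly.

from collections import Counter

def attribute_distribution(records, attr):
    """Share of each value of `attr` among the given records."""
    counts = Counter(r[attr] for r in records)
    total = len(records)
    return {value: count / total for value, count in counts.items()}

def fairness_deviation(records, clusters, sensitive_attrs):
    """Average absolute gap between each cluster's sensitive-attribute
    distribution and the whole-population distribution."""
    gaps = []
    for attr in sensitive_attrs:
        population = attribute_distribution(records, attr)
        for members in clusters:
            cluster_dist = attribute_distribution(members, attr)
            for value, pop_share in population.items():
                gaps.append(abs(cluster_dist.get(value, 0.0) - pop_share))
    return sum(gaps) / len(gaps)

# Toy applicant pool: half of each gender.
applicants = [{"gender": "F"} for _ in range(4)] + [{"gender": "M"} for _ in range(4)]

# A skewed clustering separates the groups entirely.
skewed = [applicants[:4], applicants[4:]]
# A balanced clustering mirrors the population in every cluster.
balanced = [applicants[0:2] + applicants[4:6], applicants[2:4] + applicants[6:8]]

print(fairness_deviation(applicants, skewed, ["gender"]))    # 0.5
print(fairness_deviation(applicants, balanced, ["gender"]))  # 0.0
```

Because the deviation is computed per attribute and averaged, the same measure extends naturally to multiple sensitive attributes at once (e.g. gender and age band together), which is the multi-attribute setting the researchers describe.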

In a way, FairKM takes a significant step towards algorithms assuming the role of ensuring fairness in shortlisting, especially in terms of human resources.

FairKM can be applied across a number of data scenarios where AI is being used to aid decision making, such as pro-active policing for crime prevention and detection of suspicious activities.

The research is scheduled to be presented at the EDBT 2020 conference in Copenhagen, Denmark, in April 2020.

(With inputs from IANS)

Tags: CCTV, IIT, IANS