
IIT Madras, UK researchers develop technology to make AI fairer

By IANS | Updated: January 29, 2020 16:45 IST

Researchers from the Indian Institute of Technology Madras (IIT-Madras) and Queen's University Belfast in the UK have developed an innovative new algorithm to make Artificial Intelligence (AI) fairer and less biased when processing data.


Companies often use AI technologies to sift through huge amounts of data in situations such as an oversubscribed job vacancy or in policing when there is a large volume of CCTV data linked to a crime.

"AI techniques for exploratory data analysis, known as 'clustering algorithms', are often criticised as being biased in terms of 'sensitive attributes' such as race, gender, age, religion and country of origin," said study researcher Deepak Padmanabhan from Queen's University Belfast.

It has been reported that applicants with white-sounding names received 50 per cent more call-backs than those with black-sounding names.

Studies also suggest that call-back rates tend to fall substantially for workers in their 40s and beyond.

When a company is faced with a process involving large volumes of data, it is impossible to sift through it all manually.

Clustering is commonly used in processes such as recruitment, where thousands of applications are submitted.

While this may cut down the time spent sifting through large numbers of applications, there is a big catch: the clustering process is often observed to exacerbate workplace discrimination by producing clusters that are highly skewed.

Over the last few years, 'fair clustering' techniques have been developed that prevent bias in a single chosen attribute, such as gender.

The research team has now developed a method that, for the first time, can achieve fairness in many attributes.

"Fairness in AI techniques is of significance in developing countries such as India. These countries experience drastic social and economic disparities and these are reflected in the data," said Savitha Abraham from IIT Madras.

"Employing AI techniques directly on raw data results in biased insights, which influence public policy and this could amplify existing disparities. The uptake of fairer AI methods is critical, especially in the public sector, when it comes to such scenarios," Abraham added.

"Our fair clustering algorithm, called 'FairKM', can be invoked with any number of specified sensitive attributes, leading to a much fairer process," the researchers said.
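The article does not give FairKM's objective function, but the kind of multi-attribute skew it aims to reduce can be illustrated. The sketch below (an assumption for illustration, not FairKM itself) measures, for any clustering, how far each cluster's make-up deviates from the dataset-wide proportions across several sensitive attributes at once; a fair clustering keeps this deviation close to zero.

```python
from collections import Counter

def representation_skew(assignments, records, sensitive_attrs):
    """Worst absolute deviation between a cluster's group proportions
    and the dataset-wide proportions, over all sensitive attributes.

    assignments     -- cluster id per record, e.g. [0, 0, 1, 1]
    records         -- list of dicts, e.g. [{"gender": "F", "age": "40+"}, ...]
    sensitive_attrs -- attribute names to audit, e.g. ["gender", "age"]
    """
    n = len(records)
    # Dataset-wide group counts per sensitive attribute.
    overall = {a: Counter(r[a] for r in records) for a in sensitive_attrs}
    # Group records by cluster.
    clusters = {}
    for cid, rec in zip(assignments, records):
        clusters.setdefault(cid, []).append(rec)
    worst = 0.0
    for attr in sensitive_attrs:
        for members in clusters.values():
            counts = Counter(m[attr] for m in members)
            for group, total in overall[attr].items():
                dataset_frac = total / n
                cluster_frac = counts[group] / len(members)
                worst = max(worst, abs(cluster_frac - dataset_frac))
    return worst
```

For example, on four applicants split evenly by gender, the assignment `[0, 0, 1, 1]` over records ordered F, M, F, M puts one F and one M in each cluster and scores 0.0 (perfectly balanced), while `[0, 1, 0, 1]` segregates the genders and scores 0.5. A fairness-aware clustering method penalises such deviation during clustering rather than merely auditing it afterwards.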

In a way, FairKM takes a significant step towards algorithms assuming the role of ensuring fairness in shortlisting, especially in terms of human resources.

FairKM can be applied across a number of data scenarios where AI is being used to aid decision making, such as pro-active policing for crime prevention and detection of suspicious activities.

The research work is scheduled to be presented at the EDBT 2020 conference in Copenhagen, Denmark, in April 2020.

(With inputs from IANS)


