
AI models fall short in predicting social interactions, shows research

By IANS | Updated: April 24, 2025 13:47 IST


New Delhi, April 24: Artificial intelligence (AI) systems fail to understand the social dynamics and context necessary for interacting with people, and the problem may be rooted in the infrastructure of AI models, Johns Hopkins University researchers said on Thursday.

Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene, a skill necessary for self-driving cars, assistive robots, and other technologies that rely on AI systems to navigate the real world, said the researchers from the top US university.

“AI for a self-driving car, for example, would need to recognise the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street,” said lead author Leyla Isik, an assistant professor of cognitive science at Johns Hopkins University.

“Any time you want an AI to interact with humans, you want it to be able to recognise what people are doing. I think this sheds light on the fact that these systems can’t right now,” Isik added.

To determine how AI models measure up against human perception, the researchers asked human participants to watch three-second video clips and rate features important for understanding social interactions on a scale of one to five.

The clips included people either interacting with one another, performing side-by-side activities, or conducting independent activities on their own.

The researchers then asked more than 350 AI language, video, and image models to predict how humans would judge the videos and how their brains would respond to watching them. For the large language models, the researchers had the AIs evaluate short, human-written captions.

The results provided a sharp contrast to AI’s success in reading still images.

“It’s not enough to just see an image and recognise objects and faces. That was the first step, which took us a long way in AI. But real life isn’t static. We need AI to understand the story that is unfolding in a scene. Understanding the relationships, context, and dynamics of social interactions is the next step, and this research suggests there might be a blind spot in AI model development,” Kathy Garcia, a doctoral student working in Isik’s lab, explained.

Researchers believe this is because AI neural networks were inspired by the infrastructure of the part of the brain that processes static images, which is different from the area of the brain that processes dynamic social scenes.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor
