AI models fall short in predicting social interactions, shows research

By IANS | Updated: April 24, 2025 13:47 IST

New Delhi, April 24 Artificial intelligence (AI) systems fail to understand the social dynamics and context necessary for interacting with people, and the problem may be rooted in the infrastructure of AI models, Johns Hopkins University researchers said on Thursday.

Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene, a skill necessary for self-driving cars, assistive robots, and other technologies that rely on AI systems to navigate the real world, the researchers from the top US university said.

“AI for a self-driving car, for example, would need to recognise the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street,” said lead author Leyla Isik, an assistant professor of cognitive science at Johns Hopkins University.

“Any time you want an AI to interact with humans, you want it to be able to recognise what people are doing. I think this sheds light on the fact that these systems can’t right now,” Isik added.

To determine how AI models measure up to human perception, the researchers asked human participants to watch three-second video clips and rate features important for understanding social interactions on a scale of one to five.

The clips showed people either interacting with one another, performing activities side by side, or carrying out activities on their own.

The researchers then asked more than 350 AI language, video, and image models to predict how humans would judge the videos and how their brains would respond to watching them. For the large language models, the researchers had the AIs evaluate short, human-written captions.

The results provided a sharp contrast to AI’s success in reading still images.

“It’s not enough to just see an image and recognise objects and faces. That was the first step, which took us a long way in AI. But real life isn’t static. We need AI to understand the story that is unfolding in a scene. Understanding the relationships, context, and dynamics of social interactions is the next step, and this research suggests there might be a blind spot in AI model development,” Kathy Garcia, a doctoral student working in Isik’s lab, explained.

Researchers believe this is because AI neural networks were inspired by the infrastructure of the part of the brain that processes static images, which is different from the area of the brain that processes dynamic social scenes.

