
Scientists use AI to map how the brain understands sentences

By ANI | Published: March 24, 2021 10:29 PM

New research involving neuroimaging and A.I. at the University of Rochester Medical Center describes the complex network within the brain that comprehends the meaning of a spoken sentence.


Have you ever wondered how you are able to hear a sentence and understand its meaning -- given that the same words in a different order would have an entirely different meaning?

"It has been unclear whether the integration of this meng is represented in a particular site in the brain, such as the anterior temporal lobes, or reflects a more network-level operation that engages multiple brain regions," said Andrew Anderson, Ph.D., research assistant professor in the University of Rochester Del Monte Institute for Neuroscience and lead author on of the study which was published in the Journal of Neuroscience.

"The meng of a sentence is more than the sum of its parts. Take a very simple example -- 'the car ran over the cat' and 'the cat ran over the car' -- each sentence has exactly the same words, but those words have a totally different meng when reordered."

The study is an example of how the application of artificial neural networks, or A.I., is enabling researchers to unlock the extremely complex signaling in the brain that underlies functions such as processing language.

The researchers gathered brain activity data from study participants who read sentences while undergoing fMRI. These scans showed activity spanning a network of different brain regions -- the anterior and posterior temporal lobes, inferior parietal cortex, and inferior frontal cortex.

Using the computational model InferSent -- an A.I. model developed by Facebook and trained to produce unified semantic representations of sentences -- the researchers were able to predict patterns of fMRI activity reflecting the encoding of sentence meaning across those brain regions.

"It's the first time that we've applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain."

Anderson and his team believe the findings could be helpful in understanding clinical conditions. "We're deploying similar methods to try to understand how language comprehension breaks down in early Alzheimer's disease.

"We are also interested in moving the models forward to predict brain activity elicited as language is produced. The current study had people read sentences; in the future, we're interested in predicting brain activity as people speak sentences."

(With inputs from ANI)

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor

Tags: University of Rochester Medical Center, University of Rochester Del Monte Institute for Neuroscience, Andrew Anderson