Study identifies neurons in the brain that react when we hear singing

By ANI | Published: February 27, 2022 11:11 AM

Doesn't our mind feel at ease when we hear a soothing song, and at other times feel like dancing when a heart-pumping number comes on? Does the brain react differently to different types of songs? In a first, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

Their study was published in the journal Current Biology.

These neurons, found in the auditory cortex, appeared to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers said.

"The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music," said Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain's auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

"There's one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they're so close that you can't disentangle them, but with intracranial recordings, we get additional resolution, and that's what we believe allowed us to pick them apart," said Norman-Haignere.

Norman-Haignere is the lead author of the study. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT's McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.
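The article describes that analysis only as a decomposition of measured responses into a small number of neural components. As a rough illustration of the general idea (the study's actual algorithm is more sophisticated, and every number and variable here apart from the 165 sounds and six components is an assumption), a plain non-negative matrix factorization of a sounds-by-voxels response matrix captures the flavor of such an approach:

```python
# Illustrative sketch only: decompose an fMRI response matrix
# (165 sounds x many voxels) into a few shared response components.
# The study's actual method differs; scikit-learn's NMF is a stand-in.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_sounds, n_voxels, n_components = 165, 5000, 6

# Synthetic stand-in data: each voxel mixes a few underlying response
# profiles, plus a little non-negative noise.
true_profiles = rng.random((n_sounds, n_components))
true_weights = rng.random((n_components, n_voxels))
responses = true_profiles @ true_weights + 0.1 * rng.random((n_sounds, n_voxels))

# Factor responses ~= profiles @ weights.
model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
profiles = model.fit_transform(responses)  # (165, 6): each component's response to each sound
weights = model.components_                # (6, 5000): how strongly each voxel expresses each component
```

On this picture, a component whose profile is high for the music clips and low for everything else would be read as "music-selective", and likewise for speech.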

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), in which electrical activity is recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain than fMRI, which measures blood flow as a proxy for neural activity.

"With most of the methods in human cognitive neuroscience, you can't see the neural representations," Kanwisher said. "Most of the kind of data we can collect can tell us that here's a piece of brain that does something, but that's pretty limited. We want to know what's represented in there."

Electrocorticography cannot typically be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures are originating before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds used in the earlier fMRI study. The location of each patient's electrodes was determined by their surgeons, so some electrodes did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data recorded by each electrode; one simple way to picture that inference step is sketched below.
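The article does not spell out the statistical analysis. A hypothetical way to picture the inference (not the paper's method) is to express each electrode's response to the 165 sounds as a non-negative mixture of candidate component profiles, like those from a decomposition such as the sketch above, and read off which components dominate:

```python
# Hypothetical sketch: treat an electrode's response to the 165 sounds
# as a non-negative mixture of known component response profiles and
# infer the mixture weights. Not the paper's actual analysis.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_sounds, n_components = 165, 6

component_profiles = rng.random((n_sounds, n_components))  # assumed already estimated
# Simulate an electrode dominated by one component (index 3), plus noise.
electrode_response = component_profiles @ np.array([0.0, 0.1, 0.0, 1.5, 0.0, 0.2])
electrode_response += 0.05 * rng.random(n_sounds)

# Non-negative least squares: which components explain this electrode?
weights, residual = nnls(component_profiles, electrode_response)
print(np.round(weights, 2))  # one dominant weight suggests a single population drives the electrode
```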

"When we applied this method to this data set, this neural response pattern popped out that only responded to singing," Norman-Haignere said. "This was a finding we really didn't expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for."

That song-specific population of neurons has very weak responses to both speech and instrumental music, and is therefore distinct from the music- and speech-selective populations identified in the 2015 study.

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.
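The article leaves the mathematical details out. One hypothetical way to sketch the combination idea (again, not the paper's actual method) is to take component response profiles estimated from the ECoG data, which are precise but spatially sparse, and fit them to every voxel of the whole-brain fMRI data collected with the same 165 sounds:

```python
# Hypothetical sketch: use ECoG-derived component profiles (precise,
# spatially sparse) to localize components in fMRI data (coarse, but
# whole-brain). Not the paper's actual mathematical method.
import numpy as np

rng = np.random.default_rng(2)
n_sounds, n_components, n_voxels = 165, 6, 5000

ecog_profiles = rng.random((n_sounds, n_components))  # from intracranial recordings
fmri_responses = rng.random((n_sounds, n_voxels))     # same 165 sounds, whole brain

# Per-voxel least squares: fmri_responses ~= ecog_profiles @ voxel_weights.
voxel_weights, *_ = np.linalg.lstsq(ecog_profiles, fmri_responses, rcond=None)

song = 3                        # suppose component 3 were the singing-selective one
song_map = voxel_weights[song]  # (5000,): one weight per voxel, i.e. a localization map
print("peak voxel:", int(song_map.argmax()))
```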

"This way of combining ECoG and fMRI is a significant methodological advance," McDermott said. "A lot of people have been doing ECoG over the past 10 or 15 years, but it's always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses."

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggested that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers said.

(With inputs from ANI)

