Research reveals brain processes speech, its echo separately

By ANI | Updated: February 16, 2024 14:10 IST

Washington DC [US], February 16 : Echoes can make speech difficult to understand, and getting rid of echoes in an audio recording is a very difficult engineering task. According to a study published in the open-access journal PLOS Biology by Jiaxin Gao of Zhejiang University in China and colleagues, the human brain appears to successfully handle the challenge by splitting the sound into direct speech and its echo.

The audio signals in online meetings and auditoriums that are not properly designed often have an echo lagging at least 100 milliseconds from the original speech. These echoes heavily distort speech, interfering with slowly varying sound features most important for understanding conversations, yet people still reliably understand echoic speech. To better understand how the brain enables this, the authors used magnetoencephalography (MEG) to record neural activity while human participants listened to a story with and without an echo. They compared the neural signals to two computational models: one simulating the brain adapting to the echo, and another simulating the brain separating the echo from the original speech.
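The echo described above is simple to model: the listener hears the direct signal plus a delayed, attenuated copy of it. The following sketch (illustrative only, not the study's code; the sample rate, 4 Hz stand-in "envelope," and 0.6 attenuation are assumptions) shows how a 100 ms echo distorts a slowly varying sound feature of the kind the article says matters most for comprehension:

```python
import numpy as np

fs = 16000                      # sample rate in Hz (assumed for illustration)
t = np.arange(0, 1.0, 1 / fs)   # 1 second of signal

# Stand-in for a slowly varying speech envelope: a 4 Hz modulation,
# the kind of feature the article says echoes interfere with most.
direct = np.sin(2 * np.pi * 4 * t)

delay_s = 0.1                   # 100 ms echo lag, as in the article
delay_n = int(delay_s * fs)

# The echo is a delayed, attenuated copy of the direct signal.
echo = np.zeros_like(direct)
echo[delay_n:] = 0.6 * direct[:-delay_n]

# What the listener actually hears is the superposition of the two.
echoic = direct + echo

# After the echo kicks in, the delayed copy partially cancels the slow
# modulation, so the mixture's peaks are noticeably smaller than the
# direct signal's; this is the distortion the brain must overcome.
print(direct.max(), echoic[delay_n:].max())
```

Separating `echoic` back into `direct` and `echo` is the hard engineering problem the article mentions; the study's finding is that the brain's neural activity behaves as if it performs that split.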

Participants understood the story with over 95% accuracy, regardless of echo. The researchers observed that cortical activity tracks energy changes related to direct speech, despite the strong interference of the echo. Simulating neural adaptation only partially captured the brain response they observed; neural activity was better explained by a model that split the original speech and its echo into separate processing streams. This remained true even when participants were told to direct their attention toward a silent film and ignore the story, suggesting that top-down attention isn't required to mentally separate direct speech from its echo. The researchers state that auditory stream segregation may be important both for singling out a specific speaker in a crowded environment and for clearly understanding an individual speaker in a reverberant space.

The authors add, "Echoes strongly distort the sound features of speech and create a challenge for automatic speech recognition. The human brain, however, can segregate speech from its echo and achieve reliable recognition of echoic speech."

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor.
