Neural realignment of spatially separated sound components

Nelli H. Salminen, Marko Takanen, Olli Santala, Paavo Alku, Ville Pulkki

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Natural auditory scenes often consist of several sound sources overlapping in time but separated in space. Yet location is not fully exploited in auditory grouping: spatially separated sounds can be perceptually fused into a single auditory object, leading to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they are deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.
Original language: English
Pages (from-to): 3356-3365
Journal: Journal of the Acoustical Society of America
Volume: 137
Issue number: 6
DOIs
Publication status: Published - 2015
MoE publication type: A1 Journal article-refereed
