Abstract
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known how the highly dynamic and variable perceptual signal is mapped to existing linguistic and semantic representations. In this novel approach, we exploited the natural acoustic variability of sounds and mapped them to magnetoencephalography (MEG) data using physiologically inspired machine-learning models. We aimed to determine how well models that differ in their representation of temporal information decode and reconstruct spoken words from MEG recordings in 16 healthy volunteers. We discovered that dynamic time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of the acoustic-phonetic features of speech. In contrast, time-locking was not highlighted in cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, including human-made sounds with temporal modulation content similar to speech. The amplitude envelope of the spoken words was particularly well reconstructed from cortical evoked responses. Our results indicate that speech is encoded cortically with especially high temporal fidelity. This speech tracking by evoked responses may partly reflect the same underlying neural mechanism as the frequently reported entrainment of cortical oscillations to the amplitude envelope of speech. Furthermore, the phoneme content was reflected in cortical evoked responses simultaneously with the spectrotemporal features, pointing to an instantaneous transformation of the unfolding acoustic features into linguistic representations during speech processing.
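For readers who want a concrete picture of the kind of stimulus-reconstruction analysis described in the abstract, below is a minimal Python sketch that decodes a speech amplitude envelope from MEG-like sensor data with a time-lagged ridge-regression backward model. All of it is illustrative: the sampling rate, channel count, lag window, and the synthetic data are assumptions, and the published study used its own physiologically inspired models rather than this simplified decoder.

```python
# Minimal sketch: reconstruct a speech amplitude envelope from (synthetic)
# MEG sensor data using a time-lagged linear backward model (ridge regression).
# All parameters and data here are illustrative assumptions, not the study's.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

fs = 100                      # sampling rate in Hz (assumed)
n_samples = 20 * fs           # 20 s of data
n_channels = 30               # number of MEG sensors (hypothetical)
n_lags = int(0.25 * fs)       # 0-250 ms of sensor lags after each sample

# Synthetic speech amplitude envelope: a slowly varying, positive signal
envelope = np.abs(np.convolve(rng.standard_normal(n_samples),
                              np.hanning(25), mode="same"))

# Synthetic MEG: each channel is a delayed, noisy copy of the envelope
meg = np.stack([np.roll(envelope, rng.integers(1, n_lags))
                + 0.5 * rng.standard_normal(n_samples)
                for _ in range(n_channels)], axis=1)

# Lagged design matrix: predict envelope[t] from meg[t .. t + n_lags]
X = np.stack([np.roll(meg, -lag, axis=0) for lag in range(n_lags)], axis=2)
X = X[:n_samples - n_lags].reshape(n_samples - n_lags, -1)
y = envelope[:n_samples - n_lags]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

model = Ridge(alpha=1.0).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Reconstruction accuracy: correlation between true and predicted envelope
r = np.corrcoef(y_test, y_pred)[0, 1]
print(f"envelope reconstruction r = {r:.2f}")
```

The time-lagged design matrix is a standard way to let a linear decoder integrate sensor activity over a short window following each stimulus sample, and reconstruction accuracy is summarized as the correlation between the true and predicted envelope.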
| Original language | English |
|---|---|
| Article number | ENEURO.0475-19.2020 |
| Pages (from-to) | 1-18 |
| Number of pages | 18 |
| Journal | eNeuro |
| Volume | 7 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1 Jul 2020 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- auditory system
- magnetoencephalography
- neural decoding
- speech processing
Fingerprint
Dive into the research topics of 'Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words'. Together they form a unique fingerprint.

Projects
2 Finished

- Individual cortical markers of language function
  Salmelin, R. (Principal investigator), Liljeström, M. (Project Member), Saarinen, T. (Project Member), Ghazaryan, G. (Project Member), Rinkinen, O. (Project Member), Hukari, A. (Project Member), Cotroneo, S. (Project Member) & Mäkelä, S. (Project Member)
  01/09/2018 → 31/12/2022
  Project: Academy of Finland: Other research funding
- Dyslexia: genes, brain functions, interventions
  Nora, A. (Project Member), Salmelin, R. (Principal investigator), Liljeström, M. (Project Member), Lindh-Knuutila, T. (Project Member), Kujala, J. (Project Member), Ghazaryan, G. (Project Member), Hakala, T. (Project Member) & Mäkelä, S. (Project Member)
  01/09/2015 → 31/08/2019
  Project: Academy of Finland: Other research funding
Equipment
- Aalto Neuroimaging Infrastructure
  Jousmäki, V. (Manager)
  School of Science
  Facility/equipment: Facility
Press/Media

- The human brain tracks speech more closely in time than other sounds
  22/06/2020
  1 item of Media coverage
  Press/Media: Media appearance