Abstract
During natural speech perception, listeners must track the global speaking rate, that is, the overall rate of incoming linguistic information, as well as transient, local speaking rate variations occurring within the global speaking rate. Here, we address the hypothesis that this tracking mechanism is achieved through coupling of cortical signals to the amplitude envelope of the perceived acoustic speech signals. Cortical signals were recorded with magnetoencephalography (MEG) while participants perceived spontaneously produced speech stimuli at three global speaking rates (slow, normal/habitual, and fast). As is inherent to spontaneously produced speech, these stimuli also featured local variations in speaking rate. The coupling between cortical and acoustic speech signals was evaluated using audio–MEG coherence. Modulations in audio–MEG coherence spatially differentiated between tracking of global speaking rate, highlighting the temporal cortex bilaterally and the right parietal cortex, and sensitivity to local speaking rate variations, emphasizing the left parietal cortex. Cortical tuning to the temporal structure of natural connected speech thus seems to require the joint contribution of both auditory and parietal regions. These findings suggest that cortical tuning to speech rhythm operates on two functionally distinct levels: one encoding the global rhythmic structure of speech and the other associated with online, rapidly evolving temporal predictions. Thus, it may be proposed that speech perception is shaped by evolutionary tuning, a preference for certain speaking rates, and by predictive tuning, associated with cortical tracking of the constantly changing rate of linguistic information in a speech stream.
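The coupling measure named above, audio–MEG coherence, quantifies the frequency-resolved linear dependence between the amplitude envelope of the acoustic speech signal and the cortical MEG signal. The following is a minimal illustrative sketch only, not the authors' analysis pipeline: it assumes both signals have already been brought to a common sampling rate, and the function name, segment length, and synthetic data are hypothetical choices made here for demonstration.

```python
# Minimal sketch of audio-MEG coherence, assuming a shared sampling rate `fs`.
# All names and parameter values are illustrative, not taken from the study.
import numpy as np
from scipy.signal import hilbert, coherence

def audio_meg_coherence(audio, meg_channel, fs, nperseg=2048):
    """Magnitude-squared coherence between the speech amplitude envelope
    and a single MEG channel (both 1-D arrays of equal length, sampled at fs Hz)."""
    # Amplitude envelope of the acoustic signal via the Hilbert transform
    envelope = np.abs(hilbert(audio))
    # Welch-averaged magnitude-squared coherence: |S_xy(f)|^2 / (S_xx(f) * S_yy(f))
    freqs, cxy = coherence(envelope, meg_channel, fs=fs, nperseg=nperseg)
    return freqs, cxy

# Synthetic example: both signals share a slow ~4 Hz modulation
fs = 200.0                                  # Hz, e.g. a down-sampled MEG rate
t = np.arange(0, 60, 1 / fs)                # 60 s of data
modulation = np.sin(2 * np.pi * 4 * t)
audio = (1 + 0.5 * modulation) * np.random.randn(t.size)  # envelope-modulated noise
meg = modulation + np.random.randn(t.size)                # signal tracking the modulation

freqs, cxy = audio_meg_coherence(audio, meg, fs)
print(f"Peak coherence at {freqs[np.argmax(cxy)]:.2f} Hz")
```

The segment length (`nperseg`) trades frequency resolution against estimator variance; relatively long segments are needed to resolve the slow modulation frequencies (below roughly 10 Hz) at which speech envelope tracking is typically observed.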
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1704–1719 |
| Number of pages | 16 |
| Journal | Journal of Cognitive Neuroscience |
| Volume | 30 |
| Issue number | 11 |
| DOIs | |
| Publication status | Published - 1 Jan 2018 |
| MoE publication type | A1 Journal article-refereed |
Funding
This work was financially supported by the Academy of Finland (Grants 255349, 256459, and 283071 to R. S. and Grant 257576 to J. K.), the Alfred Kordelin Foundation (Grant 160143 to A. A.), the Emil Aaltonen Foundation (Grant 170011 N1 to A. A.), the Finnish Cultural Foundation (Grant 00170944 to T. S.), and the Sigrid Jusélius Foundation (grant to R. S.). MEG and MRI data were recorded at the Aalto NeuroImaging research infrastructure.
Equipment
- Aalto Neuroimaging Infrastructure
  Jousmäki, V. (Manager)
  School of Science
  Facility/equipment: Facility