Gaze-direction-based MEG averaging during audiovisual speech perception

Lotta Hirvenkari, Veikko Jousmäki, Satu Lamminmäki, Veli-Matti Saarinen, Mikko E. Sams, Riitta Hari

Research output: Journal article › Article › Scientific › Peer-reviewed

6 Citations (Scopus)
184 Downloads (Pure)

Abstract

To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and subject’s gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged to two categories according to the gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m’) was a fifth smaller to incongruent than congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.
Original language: English
Article number: 17
Pages: 1-7
Number of pages: 7
Journal: Frontiers in Human Neuroscience
Volume: 4
DOI - permanent links
Status: Published - 8 Mar 2010
MoE publication type: A1 Original article in a scientific journal

