Gaze-direction-based MEG averaging during audiovisual speech perception

Lotta Hirvenkari, Veikko Jousmäki, Satu Lamminmäki, Veli-Matti Saarinen, Mikko E. Sams, Riitta Hari

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and the subjects’ gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and the responses were averaged into two categories according to gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m’) was about one fifth smaller to incongruent than to congruent stimuli. The results demonstrate the feasibility of gaze-based averaging of MEG signals under realistic viewing conditions.
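The core of the method, sorting evoked responses into gaze-defined categories before averaging, can be illustrated with a minimal MNE-Python sketch. The file names, the stimulus channel, and the per-trial gaze-label array below are hypothetical stand-ins; the authors' actual analysis pipeline is not described on this page.

```python
# Sketch of gaze-direction-based averaging of MEG epochs.
# Assumes preprocessed MEG data plus a synchronized eye-tracker label
# per trial; all file names and the stimulus channel are hypothetical.
import numpy as np
import mne

# Load raw MEG data and find stimulus-onset events.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
events = mne.find_events(raw, stim_channel="STI 014")

# Epoch around each utterance onset (stimuli repeated every 3 s).
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0,
                    baseline=(-0.2, 0.0), preload=True)

# Hypothetical per-trial gaze labels: True where the subject viewed
# the congruently articulating face, False for the incongruent face.
gaze_on_congruent = np.load("subject01_gaze_labels.npy").astype(bool)

# Average the epochs separately for the two gaze-defined categories.
congruent_idx = np.where(gaze_on_congruent)[0]
incongruent_idx = np.where(~gaze_on_congruent)[0]
evoked_congruent = epochs[congruent_idx].average()
evoked_incongruent = epochs[incongruent_idx].average()
```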
Original language: English
Article number: 17
Pages (from-to): 1-7
Number of pages: 7
Journal: Frontiers in Human Neuroscience
Volume: 4
DOIs
Publication status: Published - 8 Mar 2010
MoE publication type: A1 Journal article-refereed

Keywords

  • auditory cortex
  • eye tracking
  • human
  • magnetoencephalography
  • McGurk illusion

