Cortical operational synchrony during audio-visual speech integration

Alexander A. Fingelkurts, Christina M. Krause, Riikka Möttönen, Mikko Sams

Research output: Contribution to journal › Article › Scientific › peer-review

50 Citations (Scopus)


Information from different sensory modalities is processed in different cortical regions. However, our daily perception is based on the overall impression resulting from the integration of information from multiple sensory modalities. At present it is not known how the human brain integrates information from different modalities into a unified percept. Using a robust phenomenon known as the McGurk effect, the present study showed that audio-visual synthesis takes place within distributed and dynamic cortical networks with emergent properties. Various cortical sites within these networks interact with each other by means of so-called operational synchrony (Kaplan, Fingelkurts, Fingelkurts, & Darkhovsky, 1997). The temporal synchronization of cortical operations processing unimodal stimuli at different cortical sites reveals the importance of the temporal features of auditory and visual stimuli for audio-visual speech integration.
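The operational-synchrony approach cited above rests on two steps: segmenting each channel's signal into quasi-stationary "operations" (marked by rapid transition points), then asking whether those segment boundaries coincide across channels more often than chance. The sketch below is a minimal illustration of that idea, not the authors' actual algorithm: the crude RMS-jump detector and the coincidence index (window sizes, thresholds, and tolerances) are all hypothetical choices made for the toy example.

```python
import numpy as np

def segment_boundaries(signal, win=50, thresh=1.5):
    """Crude change-point detector: mark samples where local RMS
    amplitude jumps between adjacent windows. A stand-in for the
    adaptive segmentation used in operational-synchrony work."""
    bounds = []
    for i in range(win, len(signal) - win, win):
        left = np.sqrt(np.mean(signal[i - win:i] ** 2))
        right = np.sqrt(np.mean(signal[i:i + win] ** 2))
        # Flag a boundary when the amplitude ratio exceeds the threshold
        if max(left, right) / (min(left, right) + 1e-12) > thresh:
            bounds.append(i)
    return np.array(bounds)

def synchrony_index(b1, b2, tol=25):
    """Fraction of channel-1 boundaries that coincide (within
    `tol` samples) with some channel-2 boundary."""
    if len(b1) == 0:
        return 0.0
    hits = sum(np.any(np.abs(b2 - t) <= tol) for t in b1)
    return hits / len(b1)

# Two toy "channels" sharing an amplitude change at sample 500,
# mimicking simultaneous operations at two cortical sites
rng = np.random.default_rng(0)
t = np.arange(1000)
ch1 = np.where(t < 500, 1.0, 3.0) * rng.standard_normal(1000)
ch2 = np.where(t < 500, 1.0, 3.0) * rng.standard_normal(1000)
b1, b2 = segment_boundaries(ch1), segment_boundaries(ch2)
print(synchrony_index(b1, b2))
```

In the actual MEG work, the statistical significance of such coincidences would be assessed against surrogate data; here the index simply quantifies boundary co-occurrence between two channels.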

Original language: English
Pages (from-to): 297-312
Number of pages: 16
Journal: Brain and Language
Issue number: 2
Publication status: Published - May 2003
MoE publication type: A1 Journal article-refereed


Keywords:
  • Audio-visual
  • Crossmodal
  • Large-scale networks
  • MEG
  • Multisensory integration
  • Operations
  • Synchronization
