Rapid Invariant Encoding of Scene Layout in Human OPA

Research output: Contribution to journal › Article › Scientific › peer-review

Research units

  • University of Cambridge
  • Columbia University
  • Western University

Abstract

Successful visual navigation requires a sense of the geometry of the local environment. How do our brains extract this information from retinal images? Here we visually presented scenes with all possible combinations of five scene-bounding elements (left, right, and back walls; ceiling; floor) to human subjects during functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). The fMRI response patterns in the scene-responsive occipital place area (OPA) reflected scene layout with invariance to changes in surface texture. This result contrasted sharply with the primary visual cortex (V1), which reflected low-level image features of the stimuli, and the parahippocampal place area (PPA), which showed better texture than layout decoding. MEG indicated that the texture-invariant scene layout representation is computed from visual input within ∼100 ms, suggesting a rapid computational mechanism. Taken together, these results suggest that the cortical representation underlying our instant sense of the environmental geometry is located in the OPA.
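To make the design and analysis described above concrete, the sketch below (Python, scikit-learn) enumerates the 2^5 = 32 possible layouts over the five scene-bounding elements and runs a cross-texture decoding scheme: a classifier trained on response patterns evoked by scenes in one surface texture is tested on patterns from another. This is a minimal illustration under assumptions, not the authors' pipeline; the data are random stand-ins and the classifier choice is arbitrary. On real data, above-chance cross-texture accuracy would indicate a texture-invariant layout code.

```python
# Illustrative sketch only: random stand-in "fMRI" patterns and an
# assumed cross-texture decoding scheme, not the authors' exact analysis.
from itertools import product

import numpy as np
from sklearn.svm import LinearSVC

# Stimulus space: presence/absence of each of the five scene-bounding
# elements gives 2**5 = 32 possible scene layouts.
ELEMENTS = ["left wall", "right wall", "back wall", "ceiling", "floor"]
layouts = list(product([0, 1], repeat=len(ELEMENTS)))
assert len(layouts) == 32

# Stand-in response patterns (scenes x voxels) for two surface textures.
rng = np.random.default_rng(0)
n_voxels = 200
patterns_tex_a = rng.normal(size=(len(layouts), n_voxels))
patterns_tex_b = rng.normal(size=(len(layouts), n_voxels))

# Cross-texture decoding: train on texture A, test on texture B.
# Generalization across textures implies a texture-invariant layout code.
for i, name in enumerate(ELEMENTS):
    y = np.array([layout[i] for layout in layouts])  # element present?
    clf = LinearSVC().fit(patterns_tex_a, y)
    print(f"{name}: cross-texture accuracy = {clf.score(patterns_tex_b, y):.2f}")
```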

Details

Original language: English
Pages (from-to): 161-171.e3
Journal: Neuron
Volume: 103
Issue number: 1
Publication status: Published - 3 Jul 2019
MoE publication type: A1 Journal article-refereed

Research areas

  • scene perception, spatial layout, scene elements, navigation, fMRI, MEG

ID: 33938759