IRCS Conference Room
Computational Visual Neuroscience Laboratory
University of Minnesota
A fully computable model of stimulus-driven and top-down effects in high-level visual cortex
Specific regions of ventral temporal cortex (VTC) appear to be specialized for the representation of certain visual categories: for example, the visual word form area (VWFA) for words and the fusiform face area (FFA) for faces. However, a computational understanding of how these regions process visual inputs is lacking. Here, we measure BOLD responses to a wide range of carefully controlled grayscale images while subjects perform different tasks. On the basis of these measurements, we develop a fully computable model of responses in VWFA and FFA. This model reveals how high-level representations are constructed from low-level stimulus properties, and shows that this bottom-up representation is scaled by the intraparietal sulcus in service of the behavioral goals of the subject. These results provide a unifying account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.
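The architecture described in the abstract, a stimulus-driven (bottom-up) response multiplicatively scaled by a task-dependent (top-down) gain, can be sketched in toy form. Every component below is a hypothetical placeholder chosen for illustration (a simple contrast-energy drive and a fixed gain table); the actual model's stages are not specified in this announcement.

```python
import numpy as np

def bottom_up_response(image):
    """Placeholder stimulus-driven drive: contrast energy (pixel variance)
    of a grayscale image with values in [0, 1]. Hypothetical stand-in for
    the model's bottom-up representation."""
    return float(np.mean((image - image.mean()) ** 2))

def task_gain(task):
    """Placeholder top-down gain (hypothetical values): the response is
    scaled up when the subject's task engages the stimulus."""
    return {"fixation": 1.0, "categorization": 1.5}[task]

def predicted_response(image, task):
    """Predicted regional response = gain(task) * bottom_up(image)."""
    return task_gain(task) * bottom_up_response(image)

# Example: the same image yields a larger predicted response under an
# engaging task than under passive fixation.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
assert predicted_response(img, "categorization") > predicted_response(img, "fixation")
```

The multiplicative form is only one way a top-down signal could scale a bottom-up representation; it is used here because the abstract describes the bottom-up representation as being "scaled" in service of behavioral goals.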