Wu & Chen Auditorium (101 Levine Hall)
Cognitive Neuroscience Lab
Carnegie Mellon University
Face and word recognition: Flip sides of the same coin
A key issue that continues to generate controversy concerns the nature of the psychological, computational, and neural mechanisms that support the visual recognition of objects such as faces and words. While some researchers claim that visual recognition is accomplished by category-specific modules dedicated to processing distinct object classes (faces in the right hemisphere, words in the left hemisphere), others have argued for a more distributed system with only partially specialized cortical regions. Considerable evidence from both functional neuroimaging and neuropsychology would seem to favor the modular view, and yet close examination of those data reveals rather graded patterns of specialization that support a more distributed account. I will explore a theoretical middle ground in which the functional specialization of brain regions arises from general principles and constraints on neural representation and learning that operate throughout cortex but that nonetheless have distinct implications for different classes of stimuli. This account is supported by computational simulations and by empirical evidence from a variety of studies (behavioral, event-related potential, functional imaging) across different populations (children, adolescents, and adults; left-handers; individuals with developmental dyslexia; and individuals with congenital prosopagnosia), which together illustrate how cooperative and competitive interactions in the formation of neural representations for faces and words account for both their shared and distinctive properties. The results are consistent with an account in which hemispheric lateralization is graded rather than binary and in which this graded organization emerges dynamically over the course of development.
This talk is part of the MINS Year of Cognition speaker series.