Light Works image by Sheila Pinkel



Lecture Archives

Susan Carey - March 28, 2014
The Origin of Concepts: Natural Number

Abstract: Alone among animals, humans can ponder the causes and cures of pancreatic cancer or global warming. How are we to account for the human capacity to create concepts such as electron, cancer, infinity, galaxy, and democracy?

A theory of conceptual development must have three components. First, it must characterize the innate representational repertoire—that is, the representations that subsequent learning processes utilize. Second, it must describe how the initial stock of representations differs from the adult conceptual system. Third, it must characterize the learning mechanisms that achieve the transformation of the initial into the final state. I defend three theses. With respect to the initial state, contrary to historically important thinkers such as the British empiricists, Quine, and Piaget, as well as many contemporary scientists, the innate stock of primitives is not limited to sensory, perceptual or sensory-motor representations; rather, there are also innate conceptual representations. With respect to developmental change, contrary to “continuity theorists” such as Fodor, Pinker, Macnamara and others, conceptual development involves qualitative change, resulting in systems of representation that are more powerful than and sometimes incommensurable with those from which they are built. With respect to a learning mechanism that achieves conceptual discontinuity, I offer Quinian bootstrapping.

I take on two of Fodor's challenges to cognitive science: 1) I show how (and in what ways) learning can lead to increases in expressive power and 2) I show how to defeat mad dog concept nativism. I challenge Fodor's claims that all learning is hypothesis testing, and that the only way new concepts can be constructed is by assembling them from developmental primitives, using the combinatorial machinery of the syntax of the language of thought. These points are illustrated through a case study of the origin of representations of natural number.

David Pesetsky - April 5, 2013
Language and Music: same structures, different building blocks

Abstract: Is there a special kinship between music and language? Both are complex, law-governed cognitive systems. Both are universal across the human species, but show some variation from culture to culture. Do the similarities run deeper than this? Although there is a rich tradition of speculation on this question, the current consensus among researchers is quite cautious. In this talk (presenting joint work with Jonah Katz), I will offer a linguist's perspective on the issue -- and argue against the cautious consensus. Though the formal properties of music and language do differ, I will propose that these differences reflect what is obvious: that the fundamental building blocks of language and music are different (for example, words vs. pitches). In all other respects, however -- what they do with these building blocks -- language and music are identical.

Bio: David Pesetsky is Ferrari P. Ward Professor of Linguistics and MacVicar Faculty Fellow at the Massachusetts Institute of Technology, where he heads the Linguistics Section of the Department of Linguistics and Philosophy. Pesetsky received his B.A. from Yale in 1977, and his Ph.D. in linguistics from MIT in 1983. Before coming to MIT as a professor in 1988, he taught at the University of Southern California (1982-1983) and at the University of Massachusetts at Amherst (1983-1988). Pesetsky's research focuses on syntax and the implications of syntactic theory for related areas such as language acquisition, semantics, phonology and morphology (word-structure). Many of his papers concern the structure of Russian, a language of special interest. Most recently, he has begun a collaborative investigation into the syntax of music and its relation to the syntax of language. (In his extra-curricular life, he is also principal second violinist of the New Philharmonia Orchestra of Massachusetts.) His publications include the books Zero Syntax (1994), Phrasal Movement and its Kin (2000), Russian Case Morphology and the Syntactic Categories (2013), and numerous published papers. He is a Fellow of the American Association for the Advancement of Science (2011) and a Fellow of the Linguistic Society of America (2012).

Joshua Tenenbaum - March 23, 2012
Modeling common-sense reasoning with probabilistic programs

Artificial intelligence (AI) has made great strides over its 60-year history, building computer systems with abilities to perceive, reason, learn and communicate that come increasingly close to human capacities. Yet there is still a huge gap. Even the best current AI systems make mistakes in reasoning that no normal human child would ever make, because they seem to lack a basic common-sense understanding of the world: an understanding of how physical objects move and interact with each other, how and why people act as they do, and how people interact with objects, their environment and other people to achieve their goals. I will talk about recent efforts to capture these core aspects of human common sense in computational models that can be compared with the judgments of both adults and young children in precise quantitative experiments, and used for building more human-like AI systems.

These models of intuitive physics and intuitive psychology take the form of "probabilistic programs": probabilistic generative models defined not over graphs, as in many current AI and machine learning systems, but over programs whose execution traces describe the causal processes giving rise to the behavior of physical objects and intentional agents. Perceiving, reasoning, predicting, and learning in these common-sense physical and psychological domains can then all be characterized as approximate forms of Bayesian inference over probabilistic programs.
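The core idea of the abstract -- a generative model written as an ordinary program, with inference by conditioning on observations -- can be sketched in a few lines. This is an illustrative toy, not a model from the talk; the scenario (an agent noisily acting toward a hidden goal) and all names are invented for illustration:

```python
import random

random.seed(0)

# A probabilistic program is ordinary code with random choices.
# Here, an agent picks a hidden goal, then acts toward it noisily.
def agent_model():
    goal = random.choice(["left", "right"])
    # The agent acts consistently with its goal 80% of the time.
    if random.random() < 0.8:
        action = goal
    else:
        action = "left" if goal == "right" else "right"
    return goal, action

# Approximate Bayesian inference by rejection sampling: keep only the
# execution traces consistent with the observed action, then read off
# the posterior over the hidden goal.
def infer_goal(observed_action, n=10000):
    kept = [g for g, a in (agent_model() for _ in range(n))
            if a == observed_action]
    return kept.count("right") / len(kept)

posterior = infer_goal("right")  # close to the exact posterior of 0.8
```

Rejection sampling is the simplest conditioning scheme; real probabilistic-programming systems use more efficient approximate-inference algorithms, but the logic -- condition a program's execution traces on data -- is the same.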

Patricia Smith Churchland - March 25, 2011
How the Mind Makes Morals

Self-preservation is embodied in our brain's circuitry: we seek food when hungry, warmth when cold, and sex when lusty. In the evolution of the mammalian brain, circuitry for regulating one's own survival and well-being was modified. For sociality, the important result was that the ambit of me extends to include others -- me-and-mine. Offspring, mates, and kin came to be embraced in the sphere of me-ness; we nurture them, fight off threats to them, keep them warm and safe. The brain knows these others are not me, but if I am attached to them, they fire up me-ness circuitry, motivating other-care that resembles self-care. In some species, including humans, seeing to the well-being of others may extend, though less intensely, to include friends, business contacts or even strangers, in an ever-widening circle. Oxytocin, an ancient body-and-brain molecule, is at the hub of the intricate neural adaptations sustaining mammalian sociality. Not acting alone, oxytocin works with other hormones, neurotransmitters, and structural adaptations. Among its many roles, oxytocin decreases the stress response, making possible the friendly, trusting interactions typical of life in social mammals. I can let my guard down when I know I am among trusted family and friends.

Two additional interconnected evolutionary changes are crucial for mammalian sociality/morality: first, modifications to the reptilian pain system that, when elaborated, yield the capacity to evaluate and predict what others will feel and do, and notably in humans, also what others want, see, and believe. Anticipating events painful to me-and-mine is more efficient when brains can represent others as having sensations and intentions, regardless of assorted contingencies in behavior and background conditions. Cortical and subcortical modifications also led to a greater capacity for remembering specific events – storing for recall the reputations of assorted others; who cannot be trusted, and who can. Second, especially owing to the expansion of the frontal brain, an enhanced capacity to learn, underscored by social pain and social pleasure, allowed acquisition of the clan's social practices, however subtle and convoluted. Increased capacity for impulse control is another feature of frontal brain expansion. Social benefits are accompanied by social demands; we have to get along, but not put up with too much. Hence impulse control – being aggressive or compassionate or indulgent at the right time – is hugely advantageous.

Patricia Kuhl - April 16, 2010
Cracking the Speech Code: Language and the Infant Brain

Some of the most revolutionary ideas in brain science are coming from cribs and nurseries. In this talk I will focus on new discoveries about early learning and the neural coding of learned information with special attention to language. Infants are born “citizens of the world” and can acquire any language easily. Until the age of 6 months, they discriminate the phonetic contrasts of all languages, something their parents are unable to do. By the end of the first year of life, infants show nascent specialization. Neural sensitivity to native-language phonetic units increases while the ability to discern phonetic differences in other languages declines. Research on infants shows they “crack the speech code” using computational skills, but also that social interaction plays a significant role in the process. Early precursors to language in typically developing infants are leading to the identification of children at risk for developmental disabilities involving language, such as children with autism. In the next decade, the techniques of modern neuroscience will play a significant role in our understanding of the neurobiology of language acquisition, and perhaps reveal principles of how children learn more generally.

Alvaro Pascual-Leone - March 20, 2009
Modifying Decision Making

In recent years, dual-process theories that contrast automated and controlled processes have been put forward to explain different areas of human cognition. In this context, will-power refers to goal-driven cognitive control or regulation of impulses, passions, cravings, and habits. Such regulation may be conceptualized as cognitive control over the balance between a "cool", reflective mental system that effortfully represents rational and reasoned goals, such as long-term mental and physical health, and a "hot", reflexive mental system that automatically guides quick, impulsive, and emotional responses to environmental stimuli.

Lesion and functional neuroimaging studies suggest that the prefrontal cortex is a critical component of the neural circuitry engaged when people voluntarily and consciously regulate their behavior. Lesion studies in particular suggest that the right prefrontal cortex plays a central role in behavioral regulation and in the control of impulsive, reflexive tendencies.

Modulation of will-power and dual-process theories offer a valuable framework that can serve to guide translational insights from cognitive neuroscience into the clinic. Proof-of-principle studies reveal that noninvasive brain stimulation of the dorsolateral prefrontal cortex with repetitive transcranial magnetic stimulation or transcranial direct current stimulation can influence decision-making, enhance will-power and promote reflective processes in healthy subjects. The same type of noninvasive brain stimulation can suppress alcohol, cocaine, nicotine and even food craving in patients known to have impaired decision-making. Modulation of decision making, and enhanced cognitive regulation of emotion, reward, and gratification, could have widespread mental and physical health benefits, with potential applications to mood disorders, anxiety, ADHD, PTSD, substance abuse, smoking, and obesity.

Christof Koch - March 28, 2008
The Biology of Consciousness

Half a century ago, many did not think it was possible to understand the secret of life. Then two scientists, Jim Watson and Francis Crick, discovered the structure of DNA, forever changing biology and the way we view ourselves in the natural order of things. We are now once again facing a similar pursuit in determining the material basis of the conscious mind. Consciousness is one of the major unsolved problems in science today. How do the salty taste and crunchy texture of potato chips, the unmistakable smell of dogs after they have been in the rain, or the awfulness of a throbbing tooth pain, emerge from networks of neurons and their associated synaptic and molecular processes?

I will summarize what is known about the biology and neurology of consciousness, outline the limits of our knowledge, and describe ongoing experiments that use visual illusions to manipulate the relationship between physical stimuli and their associated conscious percepts. I will introduce the audience to the modern, empirical program to discover and characterize the neuronal correlates of consciousness (NCC). I will conclude by discussing the limitations of a scientific approach to consciousness.

William Bialek - April 20, 2007
Searching for simple models

The world is a complicated place. In trying to understand it, physics often has made progress by focusing on simple model problems, places where we can build our intuition before tackling the full complexity of nature. Can we do this in cognitive science, or does putting "simple" and "cognitive" in the same sentence already mean that we are talking nonsense?

In this lecture I'll look at how a small corner of the fly's brain makes sense out of the visual world, and at how networks of neurons in the salamander retina cooperate to generate surprising collective behavior. These systems certainly are simpler than the human brain, and this provides us with an opportunity to push our understanding much further, to the point where we really have a mathematical theory for what is going on rather than just parameterized models. Within this theoretical framework, we can see, at least tentatively, how to bridge the gap from the simpler systems to the more challenging problems of cognition and the dynamics of large networks in the cortex. I'll emphasize the concrete predictions that we can make, and perhaps most importantly I'll argue that the lessons from simpler systems point to new kinds of experiments that we should be doing in order to give a deeper characterization of cognitive function.

Elissa L. Newport - March 17, 2006
How children shape languages: Language acquisition and the emergence of signed and spoken languages

As human children and adults learn their native languages, two apparently distinct phenomena occur. First, children (and, to some degree, adults) are remarkably adept at learning the details of the particular language to which they are exposed. Children exposed to English learn English, while those exposed to Japanese learn Japanese. To account for how they do this, Richard Aslin and I have been developing an approach to language acquisition known as "statistical learning." Our basic idea is that human language acquisition involves naturally and unconsciously computing, over a stream of speech, such things as how frequently sounds co-occur, how frequently words occur in similar contexts, and the like. Learners use these computations to determine regular versus accidental properties of the language and to learn its rules. Our studies show that adults, infants, and even nonhuman primates perform such computations online and with remarkable speed, on both speech and nonspeech materials.

At the same time, children do not always acquire what they are exposed to: under certain circumstances, they reliably change languages. Studies of the emergence of Nicaraguan Sign Language, as well as of other signed and spoken languages, suggest that children are a prime force in developing and expanding languages. Our research shows that this phenomenon can also be incorporated into an understanding of statistical learning. Even in the lab, given certain types of input, learners reliably change the patterns of the language; and children do this strikingly more often than adults.

Taken together, our studies of language acquisition under natural and laboratory circumstances are beginning to help us understand how children learn and also create languages.

Colin Camerer - February 25, 2005
Behavioral Game Theory

Colin Camerer's specialty in the last few years has been "behavioral game theory", a subfield (or "franchise") of behavioral economics which uses experimental evidence to establish how psychological limits on the ability to make calculations and plan ahead, the way in which people react to fairness, and learning from experience influence behavior in situations described by "game theory". Game theory is a mathematical analysis of any social situation in which one player -- typically a person, but possibly a firm or nation -- tries to figure out what other players will do, and chooses the best strategy given those guesses about others. Most game theory describes the fictional behavior of an ideal, hypercalculating, emotionless player (like Mr. Spock from Star Trek) and, as a result, is not always a good guide to how normal people who don't plan too far ahead will actually behave. His 2003 book Behavioral Game Theory describes hundreds of different experimental studies which show where game theory predicts well and predicts poorly, and suggests some new kinds of theory. Behavioral game theory gives precise predictions about how people who think only a couple of steps ahead, have both guilt and envy toward others, and learn from experience are likely to behave.

Ray Jackendoff - February 20, 2004
Toward a Cognitive Science of Culture and Society

It is a commonplace nowadays to speak of social categories and cultural institutions as being "socially constructed." This lecture will explore the question of what it takes for an individual to participate in such "socially constructed" entities -- what kind of mind one has to have in order to live in a human society. The question has strong parallels with contemporary linguistics, which investigates the cognitive structure required to be a speaker of a human language (another "socially constructed" entity). As in the case of language, the study of social cognition is consistent both with acknowledging cultural diversity and with a substantive universal cognitive framework that underlies the ability to learn and function in one's culture. The inquiry leads quite directly to connections with issues in anthropology, primatology, evolutionary psychology, legal and moral philosophy, economics, and religion, as well as more traditional cognitive sciences such as linguistics and developmental psychology.

Geoffrey Hinton - March 7, 2003
Learning Representations by Unlearning Beliefs

Neural networks need to learn good ways of representing the data they receive from sensors. When there is no teacher to specify how each sensory input ought to be represented, a network can learn useful features and constraints by discovering combinations of sensor values that occur much more often or much less often than might be expected. Using its features and constraints, the network can associate a "surprise level" with any pattern of sensory values. A good set of features and constraints is one that associates a low surprise level with real sensory data and a high surprise level with all other possible combinations of sensor values.

It is very hard to evaluate a proposed set of features and constraints because all possible combinations of sensor values must be considered. Nevertheless, there is a simple way of improving the network's current features and constraints. Starting with a real pattern of sensor values, the network makes many small adjustments to the data to make it less surprising. Having corrupted the data to fit in with its current beliefs, the network then modifies its features and constraints to make the real data less surprising and the corrupted data more surprising. I shall show some examples of this learning procedure in action.
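The procedure described above resembles contrastive-divergence-style learning in an energy-based network such as a restricted Boltzmann machine. The following is a minimal sketch under that assumption, not the exact algorithm from the talk; the network sizes and variable names are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_features = 6, 4
# Weights connecting sensor values to learned features; "surprise" is
# low when the features explain the sensor pattern well.
W = 0.01 * rng.standard_normal((n_sensors, n_features))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contrastive_step(v_real, W, lr=0.1):
    # Infer feature activities from the real sensor pattern.
    h_real = sigmoid(v_real @ W)
    # "Corrupt" the data toward the network's current beliefs:
    # sample features, then reconstruct the sensor pattern from them.
    h_sample = (rng.random(h_real.shape) < h_real).astype(float)
    v_model = sigmoid(h_sample @ W.T)
    h_model = sigmoid(v_model @ W)
    # Make the real data less surprising and the corrupted
    # (model-generated) data more surprising.
    return W + lr * (np.outer(v_real, h_real) - np.outer(v_model, h_model))

v = rng.integers(0, 2, n_sensors).astype(float)  # one binary sensor pattern
W = contrastive_step(v, W)
```

The key design point mirrors the text: the network never enumerates all possible sensor patterns; it compares real data only against its own small corruptions of that data.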

Stanislas Dehaene - March 7, 2002
Cerebral Bases of the Number Sense in the Parietal Lobe

What are the origins of the human sense of numbers and arithmetic? Parietal cortex is systematically activated whenever we calculate, and its lesion can cause severe deficits of number manipulation. The human parietal lobe may therefore contain a category-specific representation of numerical quantity. An alternative possibility, however, is that it is merely involved in generic visuo-spatial and/or linguistic processes that are not specific to the number domain. To address this issue, several fMRI experiments will be presented. Those experiments reveal a systematic map of visuo-spatial, language, and calculation activations in the parietal lobe, within which an area in the middle intraparietal sulcus responds solely during number processing, not during other linguistic or spatial tasks. This region shows notation-independent subliminal quantity priming, suggesting that it unconsciously encodes the quantity meaning of numbers. A new model is proposed according to which a quantity-specific area of the intraparietal sulcus interacts with other regions of the parietal lobe involved in phonological and spatial attention processes. The model suggests an evolutionary expansion of the inferior parietal lobule in humans, and helps explain the patterns of adult and developmental dyscalculia.

Martin Nowak - February 9, 2001
Evolution of Language: From Animal Communication to Universal Grammar

Abstract Not Available

Elizabeth Spelke - January 21, 2000
Core Knowledge and Cognitive Development

This talk explores three old-fashioned ideas about human thinking: that it differs qualitatively from that of all other animals, that it increases in scope and power as children learn, and that it depends on language. Although decades of research and argument have cast doubt on these ideas, I will defend them all, drawing first on studies of spatial memory and navigation and then on studies of number and arithmetic in animals, children, and human adults. These studies suggest that a critical ingredient in human intelligence is our promiscuous capacity to combine old concepts in new ways, and that natural languages provide the medium for these combinations.

Daniel Dennett - October 2, 1998
Things About Things

Every science thrives on oversimplifications, and cognitive science is no exception. What makes cognitive science cognitive is its assumption of states or processes or events that exhibit intentionality or aboutness, but some of the most popular idealizations that attempt to capture aboutness turn out to be false friends. There are alternative things about things that offer somewhat better prospects.

The Benjamin and Anne A. Pinkel Endowed Lecture Fund was established through a generous gift from Sheila Pinkel on behalf of the estate of her parents, Benjamin and Anne A. Pinkel, and serves as a memorial tribute to the lives of her parents. Benjamin Pinkel, who received a BSE in Electrical Engineering from the University of Pennsylvania in 1930, was actively interested in the philosophy of the mind and published a monograph on the subject, Consciousness, Matter, and Energy: The Emergence of Mind in Nature, in 1992, the objective of which is a "re-examination of the mind-body problem in the light of scientific information." The lecture series is intended to advance the discussion and rigorous study of the deep questions which engaged Dr. Pinkel's investigations.

University of Pennsylvania