Our research focuses on the origins of knowledge in humans. The past several decades have witnessed a blossoming in research on perceptual and cognitive development in infancy, and a view has emerged that infants take in far more information, and are more aware of their surroundings, than we often give them credit for. Advances in methods for tapping infant perception and knowledge provide compelling evidence that by the end of the first year after birth, infants seem to know many of the basics of the world around them: Objects tend to behave in certain ways (e.g., they persist when they are occluded), people interact with each other using language and gesture, and moving around and handling objects are good ways of obtaining more knowledge. Despite these advances, fundamental questions remain concerning how this state of knowledge comes to be. Our lab explores these questions with preferential looking and eye tracking paradigms, as well as connectionist modeling of developmental phenomena. Because the focus is on origins, we are less interested in participating in traditional “nature-nurture” debates (though we sometimes engage in them) than in understanding and elucidating precise developmental mechanisms, no matter what they might be: endogenous prenatal or postnatal organization, the role of experience in shaping responses to recurring patterns, contributions of perceptual (i.e., low-level) skills to cognitive (i.e., high-level) functions, and so on. The question of origins of knowledge, then, lies at the intersection of developmental psychology, vision science, cognitive science, and developmental neurobiology. Our general view could be expressed as: Many “smart” mechanisms emerge from simple mechanisms, given the right environment.
Infants are born equipped with rudimentary perceptual and learning skills and a handful of reflexes, but there is little evidence that anything resembling “knowledge” is available to neonates, beyond the ability to acquire information quickly and retain it over short intervals. Within several months, however, the situation is radically different. How are these changes best characterized? The goal of our research in exploring this question is, first, to describe age-related changes in visual perception and early learning abilities and, second, to explain the mechanisms that are responsible for these changes. To do this, we try to distill a question to its fundamental essence, see how and when infants respond to the simplest possible version of a cognitive challenge, and from there, develop new theory. We devote considerable time and energy to developing new methods, such as computer-controlled experiments and stimuli and the recording of eye movements in infants. Computer-controlled experiments give us precise management of stimulus generation and presentation. Recording eye movements is technically challenging, but the resulting data are incomparable in their precision and, we believe, bring us as close as possible to what infants are thinking. We also use imaging techniques (functional magnetic resonance imaging with adults, electroencephalography with infants) in studies that explore cortical correlates of perception and learning, and computational models of perception. Evidence from our lab strongly suggests that much of the development of infant cognition can be explained with some simple learning mechanisms (e.g., associative learning), time to observe the world, the ability to direct attention via eye movements, and the rapid pace of brain development known to occur from the initial formation of neurons (early in pregnancy) through the second year after birth.
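The kind of simple associative learning we have in mind can be sketched with the classic Rescorla-Wagner rule, in which the associative strength of each present cue is updated in proportion to the prediction error on each trial. The code below is an illustrative toy, not a model from our studies; the cue names and parameter values are made up.

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0):
    """Rescorla-Wagner learning: V <- V + alpha * beta * (reward - V_total).

    trials: list of (cues, reward) pairs, where cues is a set of cue names.
    Returns a dict mapping each cue to its learned associative strength.
    """
    V = {}
    for cues, reward in trials:
        # Prediction is the summed strength of all cues present on this trial.
        v_total = sum(V.get(c, 0.0) for c in cues)
        error = reward - v_total  # prediction error drives learning
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * error
    return V

# A hypothetical cue ("tone") repeatedly paired with an outcome gradually
# acquires associative strength approaching the outcome's value of 1.0.
V = rescorla_wagner([({"tone"}, 1.0)] * 20)
```

The negatively accelerated learning curve produced by this error-driven update is one reason such simple mechanisms are attractive as candidate explanations for gradual developmental change.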
The visual world is characterized by occlusion: objects that are nearer to an observer typically obscure objects that are farther away. Nevertheless, we tend to perceive objects as whole and complete, rather than as fragments. This perceptual skill (known as perceptual completion) extends across space, in the case of partly visible objects, and it extends across time, in the case of objects that go out of sight and then come back into view. A fundamental question concerns the developmental origins of perceptual completion. Our research has revealed a similar developmental pattern in both kinds of perceptual completion: Initially in postnatal development, infants do not perceive completion, instead only seeing what is “directly” visible. Over the next several months, infants come to perceive objects as complete and coherent across space and time. How does this happen? Our findings suggest an important role for learning, experience, and self-directed exploration (via eye movements) in the development of object perception and perceptual completion.
The environment is highly structured and largely redundant. There is considerable consistency from one point of visual space to the next, and from moment to moment very little actually changes in our surroundings. The sounds we hear, for example, in speech, likewise occur in predictable sequences. To what extent are infants sensitive to visual and auditory sequences, and how does this sensitivity develop? Our research has documented an early sensitivity to predictable visual sequences, but more complex, abstract visual patterns (that require inductive processes) are far more difficult to acquire. Abstract patterns instantiated in speech, in contrast, are more easily learned, even though they would seem logically equivalent to the visual patterns. Experiments are underway that explore the reasons underlying these differences in learning.
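One common way of formalizing the predictability of such sequences is the transitional probability between successive elements: how likely element B is, given that element A just occurred. The toy sketch below (the syllables and the two "words" are invented for illustration) shows that within-word transitions in a concatenated stream are perfectly predictable, while between-word transitions are not, which is the kind of statistical structure a sequence learner could exploit.

```python
import random
from collections import Counter

def transitional_probs(stream):
    """Estimate P(next | current) from a sequence of elements."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A toy "speech" stream: two made-up trisyllabic words in random order.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"]]
stream = [syl for _ in range(200) for syl in random.choice(words)]

tp = transitional_probs(stream)
# Within a word, each syllable fully predicts the next (probability 1.0);
# across a word boundary, the next syllable is uncertain (probability < 1.0).
```

The same computation applies whether the elements are syllables, shapes, or events, which is what makes the contrast between auditory and visual sequence learning theoretically interesting.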
Faces are a special class of visual stimulus to people in general, and infants in particular, and it has long been known that infants are especially interested in faces and learn familiar faces very quickly after birth. Much is also known about the features of faces, such as the orientation and configuration of face parts (eyes, nose, mouth), that are involved in face perception and recognition in infants and adults. Nevertheless, surprisingly little is known about how these recognition processes are made available to infants. We address this question by recording infants’ eye movements as they view faces under a variety of circumstances to reveal mechanisms that guide developments in face perception, recognition of familiar faces, and matches between facial and vocal emotional expression.
Much is known about developments in infant visual preferences, but far less is known about the mechanisms that underlie these preferences. We investigate this question with experiments that probe motion discrimination and fundamental characteristics of the oculomotor system, and how these systems change in early development. One current interest is the interplay between visual selective attention and inhibitory mechanisms; that is, the sequence of steps that determine where visual attention is directed and where it is inhibited. For this work, we use both behavioral and computational methods.
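The attend-then-inhibit cycle at issue can be sketched computationally as winner-take-all selection over a salience map with inhibition of return: attention goes to the most salient location, that location is then suppressed, and attention moves on. The map values and function name below are hypothetical; this is a bare-bones illustration of the idea, not our model.

```python
import numpy as np

def attention_scanpath(salience, n_fixations=3, inhibition=0.0):
    """Winner-take-all attention with inhibition of return (a sketch).

    At each step, attention is directed to the most salient location,
    which is then suppressed so that attention can shift elsewhere.
    """
    s = np.array(salience, dtype=float)
    path = []
    for _ in range(n_fixations):
        loc = np.unravel_index(np.argmax(s), s.shape)  # winner-take-all
        path.append(loc)
        s[loc] = inhibition  # inhibition of return: suppress attended spot
    return path

# A toy 3x3 salience map: attention visits locations in order of salience.
path = attention_scanpath([[0.1, 0.9, 0.2],
                           [0.5, 0.3, 0.8],
                           [0.4, 0.7, 0.6]])
# path -> [(0, 1), (1, 2), (2, 1)]
```

Even this minimal loop makes the empirical question concrete: measured infant scanpaths can be compared against what a purely salience-driven selector with inhibition would predict.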
In the real world, objects and people are both seen and heard, and infants are sensitive to this intermodal information from early on. Current experiments investigate whether sensitivity to intermodal information supports cognitive development beyond what can be gained through unimodal presentation, in studies of object perception, face/voice perception, and spatiotemporal “binding.”