The basic goal of my research is to investigate how humans learn and reason, and how intelligent machines might emulate them. In tasks that arise both in childhood (e.g., perceptual learning and language acquisition) and in adulthood (e.g., action understanding and causal inference), humans often succeed, seemingly paradoxically, in drawing accurate inferences from inadequate data. The data available are often sparse (very few examples), ambiguous (multiple possible interpretations), and noisy (low signal-to-noise ratio). How can an intelligent system cope?
I approach this basic question as it arises in both perception and higher cognition. My research is highly interdisciplinary, integrating theories and methods from psychology, statistics, computer vision, machine learning, and computational neuroscience. I use predictions derived from computational models to guide the design of experiments that test perceptual and cognitive theories. The unified picture emerging from my work is that the power of human inference rests on two basic principles. First, people exploit generic priors: tacit general assumptions about how the world works that guide learning and inference from observed data. Second, people can generate and manipulate structured representations, which are organized around distinct roles, such as the multiple joints moving relative to one another in action perception, or the more abstract roles of cause and effect. My current areas of active study include motion perception, action recognition, object recognition, causal learning, and analogical reasoning.
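The first principle can be made concrete in standard Bayesian terms; the formulation below is offered only as an illustrative sketch of how priors constrain inference from sparse data, not as a description of any particular model in my work:

\[
P(h \mid d) \;\propto\; P(d \mid h)\, P(h)
\]

Here the posterior belief in a hypothesis \(h\) given sparse, noisy data \(d\) is shaped jointly by the likelihood \(P(d \mid h)\) and a prior \(P(h)\) that encodes tacit assumptions about how the world typically works; when data are scarce, the prior carries much of the inferential weight.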