My research in psycholinguistics focuses on speech perception, with a special emphasis on audiovisual speech perception. That is, I investigate how we process speech by hearing and seeing a speaker talk (lip-reading). We use this visual information not only to understand what a speaker says, but also to learn, for example, about a speaker's idiosyncratic way of speaking or the meaning of novel words. Topics I investigate include the consequences of aging for speech perception; individual differences; perceptual learning of speaker idiosyncrasies; the dynamics of audiovisual spoken-word recognition; the learning of multisensory relationships; cross-modal temporal synchrony and binding; and multisensory perception more generally. I address these questions with a variety of methods (e.g., eye tracking, event-related potentials, motion tracking), testing young and older adults as well as infants and toddlers.