My research broadly concerns how people recover the meaning of spoken language as it unfolds in real time, and in particular how they cope with the vast variability in the speech signal. To this end, we use techniques such as head-mounted eye-tracking, event-related potentials, and intracranial brain recording to examine some of the earliest sensory representations of speech. I apply both developmental and individual-differences approaches to this question, examining typically developing infants and children as well as those with language or hearing impairments, particularly children with specific language impairment or those who use cochlear implants. Finally, in order to understand possible mechanisms, I construct computational models using statistical learning, neural network, and dynamical systems approaches.
- Cognitive neuroscience
- Autism and intellectual disabilities
- Hearing loss
- Developmental language disorder and dyslexia
- Auditory system
- Developmental neuroscience
- Auditory neuroscience