Research

Funded Research Projects

Major Research Areas

Auditory Plasticity and Auditory Expertise

We want to understand brain plasticity: how does the brain change as a result of its experiences? The brain can alter how it processes information remarkably quickly. We want to understand how these processes work and how they support speech perception, reading, and language comprehension. Can we harness the power of brain plasticity to help people who struggle with reading development or language learning? Can we understand the difference between successful and less-successful learners through the structure and function of their brains?

Phonetic Variability in Speech Perception

We want to understand the cognitive consequences of phonetic variability in speech perception. When we listen to speech, we almost never hear the exact same stimulus more than once. Myriad factors combine to render the speech signal both immensely complex and immensely variable. Idiosyncratic anatomical, physiological, and cultural differences render the phonetic realization of speech different from person to person. The same words spoken by a single individual will differ substantially in their phonetics depending on context, environment, audience, and so on. All of this variability presents a challenge for the neural systems processing speech: How do we recognize consistent messages in the presence of varying phonetics? Interestingly, some research also suggests that experiencing all this variability is a crucial part of language learning. We want to understand how and when variability sometimes facilitates and sometimes complicates speech perception.

Language and Reading Development and Disorders

We want to understand what changes occur in the brain during the acquisition of language and literacy, and how these processes differ for individuals who struggle to develop typical reading or language abilities. Between 5% and 15% of children struggle to develop typical reading abilities, the hallmark of a disorder known as developmental dyslexia. Scientific research, including work from our laboratory, strongly suggests that the underlying difficulty in dyslexia stems from a difference in how these individuals’ brains represent and process the sounds of language. Our research attempts to identify the source of this difference. Is there something distinct about the processes underlying either rapid or long-term auditory plasticity in the brains of individuals with dyslexia? If the source of reading difficulty lies in representing and processing the sounds of language, why do individuals with dyslexia struggle to learn to read, but not to understand speech? Using advanced brain imaging and neuromodulation techniques, can we develop quantitative biomarkers for the diagnosis or remediation of developmental communication disorders like dyslexia?

Voice Recognition and Talker Identification

We want to understand how people recognize talkers by the sound of their voice. Talker identification (voice recognition) is an important social auditory skill. When people are talking in a group, how do they keep track of who said what? How can you know who called your name before you see them? When you answer the phone, how can you tell who is on the line? Our research has shown that voice recognition interacts with language processing in interesting and complicated ways. People are more accurate at recognizing voices when they can understand the language being spoken, and less accurate when they cannot understand what is being said. Intriguingly, individuals with dyslexia don’t appear to experience this language-familiarity effect in talker identification: Unlike their peers, they get no benefit in voice recognition from understanding what is being said. Why is voice recognition impaired in dyslexia? What are the processes by which people learn, remember, and recognize voices? Is there anything special about familiar voices, like those of our friends, family, or famous actors? In what ways does voice recognition depend on language processing, and in what ways is it independent? Are there parts of the brain that care about voices without caring about speech?