CAREER: Encoding models for studying attentional and multisensory modulation of the human subcortical auditory system

NSF

Status: Open

About This Grant

The ability of humans to listen and converse in noisy places like bustling city streets or busy bars is remarkable but also mysterious. How are listeners able to extract a clear signal from such loud background noise? The goal of this project is to understand how lower brain areas in the auditory pathway interact with higher auditory cortical areas when we try to communicate in acoustically challenging conditions. The lower parts of the auditory system, located in the brainstem and midbrain, process sounds and send information to higher-level cortical areas to be analyzed and understood. The cortex also sends signals back down to the lower auditory centers, but the nature of these “top-down” signals and how they help us listen is not known. Remarkably, there are more top-down projections in the auditory system than “bottom-up” projections, which suggests that higher areas shape incoming auditory information through these top-down projections. Understanding these top-down signals can help explain why some people, even with normal hearing, struggle to understand what is being said to them when there is a lot of background noise. Untangling the role of these top-down signals can also inform our understanding of auditory perception: how much of it is driven by external sounds in the world, shaped by internal information such as expectations, or filtered by selective attention. In addition to the scientific goals, this project includes an integrated research, education, and outreach plan that will give high school students and undergraduates hands-on experience in dynamic brain imaging as part of classes and summer outreach programs on auditory perception and auditory neuroscience.
To deepen understanding of the role of these top-down projections, researchers will measure participants’ brain activity using electroencephalography (EEG) while they listen to a variety of sounds, including clicks, music, and speech. The project will study how top-down attention affects subcortical auditory processing by playing two stories for participants, asking them to attend to one of them, and then comparing the brain responses to the attended versus the unattended story. Researchers will also test how visual top-down signals affect subcortical auditory responses by comparing brain responses to recordings of speech in which the video of the talker either matches or is mismatched with the speech audio. Using a set of mathematical tools that model auditory processing, called “encoding models,” it will be possible to analyze how top-down connections help lower auditory areas process speech sounds. Researchers will take a big-data approach and develop the next generation of encoding models by using deep neural networks to analyze massive amounts of brain data gathered from many EEG recordings of a small number of subjects over many weeks. This approach will give researchers more powerful tools to analyze auditory processing and attention. These more sophisticated encoding models will illuminate the critical role lower auditory centers play in perception, reveal the strong impact of top-down signals on auditory perception, and, more generally, clarify the role that top-down signals play in shaping perception across all sensory modalities. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
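To illustrate the idea behind an “encoding model,” the sketch below fits a simple linear model (a temporal response function) that maps a stimulus feature to a neural response via time-lagged ridge regression. This is a minimal, hypothetical example on synthetic data, not the project’s actual pipeline; the sampling rate, lag range, and regularization strength are all assumed for illustration.

```python
import numpy as np

# Hypothetical illustration: a linear encoding model maps a stimulus
# feature (here, a synthetic speech envelope) to a simulated EEG channel
# using time-lagged ridge regression. All parameters are assumptions.
rng = np.random.default_rng(0)
fs = 100                      # sampling rate in Hz (assumed)
n = 3000                      # samples (30 s of data)
lags = np.arange(0, 25)       # 0-240 ms of stimulus history (assumed)

envelope = rng.standard_normal(n)      # stand-in stimulus envelope
true_trf = np.exp(-lags / 8.0)         # toy "brain" filter

# Simulated EEG response: filtered envelope plus sensor noise
eeg = np.convolve(envelope, true_trf, mode="full")[:n]
eeg += 0.5 * rng.standard_normal(n)

# Lagged design matrix X: column j holds the envelope delayed by lags[j]
X = np.zeros((n, lags.size))
for j, lag in enumerate(lags):
    X[lag:, j] = envelope[: n - lag]

# Ridge regression: w = (X'X + lambda * I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)

# Model fit: correlation between predicted and measured response
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
print(f"prediction correlation r = {r:.2f}")
```

The fitted weights `w` estimate the brain’s stimulus-to-response filter, and the prediction correlation `r` quantifies how well the model explains the recording; comparing such fits across conditions (e.g., attended versus unattended speech) is one common way this family of models is used with EEG.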

Focus Areas

education

Eligibility

University, nonprofit, small business

How to Apply

Funding Range

Up to $423K

Deadline

2027-08-31

Complexity
Medium