NIDA - National Institute on Drug Abuse
PROJECT SUMMARY

Vocalizations and other bioacoustic signals convey an individual's identity, location, and behavioral state, and these sounds are used to guide social-acoustic interactions. While much excellent research has examined the neural basis of vocal communication, it has largely focused on brief, stereotyped interactions between unfamiliar adults in featureless test cages. Here, we propose to study auditory communication within a cohesive social unit, the family, of a highly social species, the Mongolian gerbil, in a stable, naturalistic setting. By integrating audio, video, and neural data, we aim to uncover the behavioral meaning of distinctive vocal bouts and test how cortical mechanisms support the use of these vocalizations to make social decisions. In Aim 1, we will create and disseminate general-purpose machine learning tools for studying auditory communication within family groups living in undisturbed naturalistic environments. Aim 1A will establish multi-animal 3D body pose tracking in complex, spacious environments. Aim 1B will develop deep learning methods to noninvasively localize and attribute vocalizations to specific family members, focusing on the multi-second vocal bouts that dominate auditory communication. Aim 1C will integrate body pose, bioacoustic signals, and neural data with a 3D environment model built with Gaussian splatting, enabling us to estimate how auditory signals are combined with line-of-sight information. In Aim 1D, we collaborate with a team at Princeton to extend and validate sound-attribution methods in a different species, environment, and behavioral paradigm (mouse courtship), establishing general-purpose, open-source tools for the field. Aim 2 develops a machine learning approach to extract low-dimensional behavioral descriptors from this large data set that serve as variables in our predictive models.
In Aim 2A, we extract recurring bouts of acoustically driven behavior, using a mixture of expert-supervised and data-driven, unsupervised approaches to annotate continuous streams of natural behavior. We propose computational innovations and employ playback and hearing-attenuation manipulations to test causal relationships between vocal bouts and behavior. In Aim 2B, we use these latent features and contextual variables to build linear Bayesian models that predict future actions, permitting us to ask how individual behavioral traits are explained by social (e.g., kinship, sex, age) and environmental factors. Aim 2C will characterize how auditory cortex activity represents vocal bouts, social and environmental factors, and subsequent behavioral decisions. Aim 3 will test the cortical network mechanisms that give rise to social and contextual modulation during auditory communication. In Aim 3A, we will silence auditory or frontal cortices, as well as the projections between them, and measure how these perturbations influence behavior. In Aim 3B, we will record wirelessly from frontal cortex during sound-driven social interactions.
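The sound-attribution problem in Aim 1B builds on classical microphone-array localization. The summary does not specify the proposal's actual pipeline, but as a minimal, hedged illustration of the underlying principle, a time-difference-of-arrival (TDOA) between two microphones can be estimated from the cross-correlation peak (all signal parameters below are hypothetical):

```python
import numpy as np

def tdoa(reference, delayed, fs):
    """Estimate the arrival-time difference (seconds) of `delayed`
    relative to `reference` from the cross-correlation peak.
    A positive result means `delayed` lags `reference`."""
    corr = np.correlate(delayed, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / fs

# Synthetic check: the same windowed call arrives 5 samples later at mic B.
fs = 125_000                                  # illustrative ultrasonic-capable rate
t = np.arange(1024) / fs
call = np.sin(2 * np.pi * 20_000 * t) * np.hanning(1024)
mic_a = np.concatenate([call, np.zeros(5)])
mic_b = np.concatenate([np.zeros(5), call])   # delayed copy
print(tdoa(mic_a, mic_b, fs))                 # → 4e-05 (a 5-sample lag at 125 kHz)
```

In practice, pairwise TDOAs across an array constrain the source position; the deep learning methods proposed here would learn such mappings directly from data rather than rely on this idealized geometry.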
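The "linear Bayesian models" of Aim 2B are not specified further in this summary. As a hedged sketch under standard conjugate assumptions (Gaussian prior on the weights, Gaussian noise), a posterior over weights mapping latent behavioral features to a future-action variable can be computed in closed form; the feature names and numbers below are invented for illustration:

```python
import numpy as np

def bayes_linear_posterior(X, y, alpha=1.0, sigma2=0.25):
    """Conjugate posterior for weights w in y = X @ w + noise,
    with prior w ~ N(0, alpha^-1 I) and noise variance sigma2.
    Returns the posterior mean and covariance of w."""
    d = X.shape[1]
    precision = alpha * np.eye(d) + (X.T @ X) / sigma2
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / sigma2
    return mean, cov

# Hypothetical example: two latent features predicting an approach-latency score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # latent features per trial
true_w = np.array([1.5, -0.7])
y = X @ true_w + rng.normal(scale=0.5, size=200)    # noisy future-action variable
w_mean, w_cov = bayes_linear_posterior(X, y)        # w_mean ≈ true_w
```

The posterior covariance is what makes such models attractive for asking how much of a trait is explained by social or environmental covariates: credible intervals on each weight come for free.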
Up to $855K
2030-12-31
Dynamic Cognitive Phenotypes for Prediction of Mental Health Outcomes in Serious Mental Illness
NIMH - National Institute of Mental Health — up to $18.3M
COORDINATED FACILITIES REQUIREMENTS FOR FY25 - FACILITIES TO I
NCI - National Cancer Institute — up to $15.1M
Leveraging Artificial Intelligence to Predict Mental Health Risk among Youth Presenting to Rural Primary Care Clinics
NIMH - National Institute of Mental Health — up to $15.0M
Feasibility of Genomic Newborn Screening Through Public Health Laboratories
OD - NIH Office of the Director — up to $14.4M
WOMEN'S HEALTH INITIATIVE (WHI) CLINICAL COORDINATING CENTER - TASK AREA A AND A2
NHLBI - National Heart Lung and Blood Institute — up to $10.2M
Metal Exposures, Omics, and AD/ADRD risk in Diverse US Adults
NIA - National Institute on Aging — up to $10.2M