
Collaborative Research: III: Medium: A consolidated framework of computational privacy and machine learning

NSF

open

About This Grant

Machine learning has grown increasingly prominent in recent years, finding applications in domains ranging from image and speech processing to disease diagnosis. This success, however, rests on massive amounts of data collected to train machine learning models, and the privacy of sensitive data has become a major concern. Existing efforts are still preliminary, and enormous challenges remain to be resolved. Crucially, stronger privacy guarantees often sacrifice important properties of machine learning models, such as predictive utility and fairness, which can be undesirable or completely unacceptable. This project develops a consolidated privacy protection framework for machine learning systems that comprehensively considers the optimal trade-offs between computational privacy and several critical properties of machine learning, including utility, fairness, and distributed learning. The project will provide a comprehensive set of tools to protect data privacy for real-world machine learning applications under different circumstances. The privacy-preserving techniques will have a transformative impact on machine learning systems across sectors, allowing companies and hospitals to enjoy the advantages of machine learning on big data while protecting data privacy under the corresponding regulations. The project thoroughly examines the real-world complications and restrictions encountered when applying differential privacy, from the privacy-utility trade-off and the privacy-fairness relation to privacy in distributed learning and post-learning privacy protection. The framework is deeply rooted in rigorous optimization, often accompanied by theoretical guarantees and aided by cutting-edge algorithmic tools such as meta-learning, adversarial learning, and federated learning.
In addition, the framework carries the following methodological innovations: differential privacy tailored to learning problems; customized privacy addressing heterogeneity in collaborative learning; privacy protection of learned models through unlearning; and consolidated privacy and fairness in learning. These efforts will significantly improve the practicality and scalability of differential privacy. The project will be systematically evaluated on real-world medical applications, and the resulting tools will be ready to tackle critical challenges in medical research. The outcomes will be incorporated into multiple courses at both the undergraduate and graduate levels and disseminated broadly through open-source software releases, workshops, undergraduate research involvement, and outreach to K-12 education, with a focus on minorities and underrepresented groups in STEM education. Students at different levels and across disciplines, both STEM and liberal arts, will participate in the research on privacy and machine learning. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
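For readers unfamiliar with differential privacy, the standard Laplace mechanism illustrates the privacy-utility trade-off the abstract describes: noise calibrated to a query's sensitivity and a privacy budget epsilon is added to the true answer, so smaller epsilon (stronger privacy) means noisier results. This is a generic textbook sketch, not the project's specific method; the function name and example values are illustrative.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially-private estimate of true_value.

    Adds Laplace noise with scale sensitivity / epsilon, the classic
    mechanism for releasing a numeric query privately.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: privately release the mean of 1000 records bounded in [0, 1].
# The mean's sensitivity is 1/1000; shrinking epsilon adds more noise.
private_mean = laplace_mechanism(0.42, sensitivity=1 / 1000, epsilon=0.5)
```

With sensitivity 1/1000 and epsilon 0.5, the noise scale is only 0.002, so utility is high; at epsilon 0.01 the scale grows to 0.1 and the released mean becomes far less reliable, which is exactly the tension the project's framework seeks to optimize.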

Focus Areas

machine learning, education

Eligibility

university, nonprofit, small business

How to Apply

Funding Range

Up to $242K

Deadline

2026-11-30

Complexity
Medium
