Artificial intelligence systems are increasingly trained using datasets containing private information about individuals in critical areas such as government services, healthcare, and education. However, these AI systems have a demonstrated risk of accidentally revealing sensitive personal information about the people whose data was used during training, creating serious privacy and security concerns. This problem threatens public trust in AI technologies and creates barriers to beneficial uses of AI in sensitive domains where privacy protection is essential. Currently, many organizations cannot safely use AI because existing privacy protection methods are either inadequate or too difficult to implement correctly. This project addresses this challenge by developing freely available software tools that prevent these privacy vulnerabilities in future AI systems. These tools will make state-of-the-art privacy protection methods practical and accessible to a broad community of developers and researchers. This work serves the national interest by advancing privacy protection for all citizens, strengthening trust in AI technologies used by government and industry, supporting American competitiveness in privacy-preserving AI development, and enabling secure use of AI in critical national infrastructure while protecting individual rights.

This project advances privacy-preserving machine learning by developing and implementing novel techniques for training large models with the strong protections of differential privacy and minimal overhead in computation and model performance. The research activities focus on incorporating tools for differentially private stochastic gradient descent into OpenDP, a community-driven open-source software project with a rigorous vetting process.
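To make the core technique concrete: differentially private stochastic gradient descent (DP-SGD) clips each per-example gradient to a fixed L2 norm and adds Gaussian noise calibrated to that clipping norm before averaging. The sketch below is a toy illustration of that aggregation step in plain Python, not code from OpenDP or this project; the function names and parameters are illustrative.

```python
import math
import random

def clip(grad, max_norm):
    """Rescale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(per_example_grads, max_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip each per-example gradient,
    sum the clipped gradients, add Gaussian noise with standard
    deviation noise_multiplier * max_norm, then average."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    clipped = [clip(g, max_norm) for g in per_example_grads]
    summed = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * max_norm
    noised = [s + rng.gauss(0.0, sigma) for s in summed]
    return [x / n for x in noised]

# Hypothetical per-example gradients for a 2-parameter model.
rng = random.Random(0)
grads = [[3.0, 4.0], [0.3, 0.4], [-6.0, 8.0]]
step = dp_sgd_step(grads, max_norm=1.0, noise_multiplier=1.0, rng=rng)
```

Clipping bounds each individual's influence on the update, and the added noise masks whatever influence remains; the project's contributions concern doing this efficiently, choosing good noise distributions and samplers, and vetting the implementation.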
First, these tools will integrate with Opacus, the open-source differentially private machine learning library developed by Meta, with the OpenDP work strengthening the privacy guarantees offered by Opacus and making the library easier for the OpenDP community to use. Second, the investigators will improve the efficiency and utility of differentially private stochastic gradient descent by optimizing the choice of noise distributions and their samplers. Third, the team will implement sophisticated privacy accountants to measure the protections of differentially private stochastic gradient descent as the algorithm runs. Finally, the investigators will develop and implement a framework in OpenDP for differentially private federated learning with precisely specified privacy guarantees. The project will explore applications in genomics and educational technology, demonstrating privacy-preserving data utilization across multiple domains while ensuring the trustworthiness of the software through community-driven development and expert vetting rather than reliance on a single commercial entity. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
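A privacy accountant tracks how the (ε, δ) privacy budget accumulates as the training algorithm runs. The toy sketch below shows only the simplest possible accountant, basic sequential composition; it is an illustration, not the project's method, and modern accountants (e.g., Rényi-DP or moments accountants) give far tighter bounds for the Gaussian noise used in DP-SGD.

```python
def basic_composition(budgets):
    """Basic sequential composition: running mechanisms with budgets
    (eps_i, delta_i) on the same data satisfies (sum of eps_i,
    sum of delta_i)-differential privacy. This is the loosest valid
    bound; practical accountants track the loss much more tightly."""
    eps = sum(e for e, _ in budgets)
    delta = sum(d for _, d in budgets)
    return eps, delta

# Cumulative budget after 100 identical steps, each (0.01, 1e-7)-DP.
total = basic_composition([(0.01, 1e-7)] * 100)  # approximately (1.0, 1e-5)
```

The value of a more sophisticated accountant is visible even in this toy: basic composition scales ε linearly in the number of steps, while advanced accountants scale roughly with the square root, allowing many more training iterations under the same budget.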
Up to $800K
2028-09-30