CAREER: Leveraging Large Language Models for Advanced Threat Protection in Cybersecurity
NSF
About This Grant
This project aims to transform how computer systems are protected against sophisticated targeted cyberattacks known as Advanced Persistent Threats (APTs). APTs are stealthy, multi-step attacks that can cause severe data breaches, undermining trust and security in both the private and public sectors. Current defenses often suffer from alert fatigue and high false-positive rates, struggle to adapt to constantly evolving threats, and cannot effectively harness rich threat intelligence or the unique insights of human investigators.

This project will harness the power of large language models (LLMs), the technology behind recent advances in generative AI, to enhance multiple stages of cyber threat protection. By leveraging LLMs' ability to understand and generate human language, the project will enable faster and more accurate threat detection and investigation, better integration of external knowledge, and smarter collaboration between human experts and automated defenses. The outcomes of this work promise to safeguard individual privacy, strengthen national security, and reduce the massive costs of data breaches and cyber incidents.

To achieve this, the research will develop an LLM-powered framework for intelligent, knowledge-enhanced, context-aware, and human-inspired cyber threat protection across the full cyber defense lifecycle, organized into three integrated thrusts. Thrust 1 extracts dynamic threat knowledge from cyber threat intelligence (CTI) reports using LLM in-context learning to construct a high-quality cybersecurity knowledge graph. Thrust 2 designs scalable, knowledge-enhanced threat detection techniques by integrating the LLM-derived CTI knowledge from Thrust 1 into provenance-based intrusion detection systems, which use system provenance (detailed records of system-level interactions and data flows) to accurately reconstruct and analyze complex attack sequences; by combining rich external threat knowledge with system-level context, this thrust addresses challenges such as concept drift and high false-positive rates. Thrust 3 enhances post-compromise threat investigation through human-LLM collaboration, including a new domain-specific investigation language, automatic investigation query synthesis, and a proactive LLM agent that assists human analysts in uncovering complex attack sequences while integrating external security intelligence.

The project will evaluate its methods on large-scale security datasets and collaborate with industry partners to ensure real-world relevance. Overall, this research aims to advance the state of the art in cyber threat defense and demonstrate the transformative potential of generative AI for cybersecurity. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
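To make Thrust 1's approach concrete, the in-context-learning extraction step could look roughly like the following sketch: a few-shot prompt asks an LLM to emit (subject, relation, object) triples for a CTI sentence, and the structured reply is loaded into a small knowledge graph. The prompt wording, the triple schema, the `call`-free stubbed reply, and all threat-actor examples are illustrative assumptions, not the project's actual design.

```python
# Sketch of few-shot (in-context learning) triple extraction from CTI text.
# The LLM call itself is stubbed out; only prompt construction and the
# parsing of a JSON reply into a small knowledge graph are shown.
import json

FEW_SHOT = """Extract (subject, relation, object) triples from the sentence.
Sentence: APT29 used spearphishing emails to deliver Cobalt Strike.
Triples: [["APT29", "uses", "spearphishing"], ["APT29", "delivers", "Cobalt Strike"]]
Sentence: {sentence}
Triples:"""

def build_prompt(sentence: str) -> str:
    """Insert a new CTI sentence into the few-shot extraction prompt."""
    return FEW_SHOT.format(sentence=sentence)

def triples_to_graph(reply: str) -> dict:
    """Parse a JSON triple list into {subject: [(relation, object), ...]}."""
    graph = {}
    for subj, rel, obj in json.loads(reply):
        graph.setdefault(subj, []).append((rel, obj))
    return graph

# In practice `reply` would come from an LLM given build_prompt(sentence);
# a canned reply stands in here to show the parsing step.
reply = '[["FIN7", "uses", "malicious macros"], ["FIN7", "targets", "point-of-sale systems"]]'
graph = triples_to_graph(reply)
```

Aggregating such triples across many CTI reports is one plausible way a cybersecurity knowledge graph of the kind described above could be assembled.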
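The system provenance underlying Thrust 2 can be illustrated with a minimal backward trace over audit-style events: starting from a flagged artifact, walk causal edges backwards to reconstruct the chain of system-level interactions that produced it. The event tuples, process names, and flagged file below are made up for illustration and are not from the project.

```python
# Sketch: backward causality tracing over a toy system-provenance graph.
from collections import defaultdict

# (source, action, target) system-level events, e.g. from audit logs
events = [
    ("firefox.exe", "wrote", "dropper.exe"),
    ("dropper.exe", "spawned", "powershell.exe"),
    ("powershell.exe", "wrote", "exfil.zip"),
    ("explorer.exe", "spawned", "firefox.exe"),
]

# Index events by target so causality can be walked backwards
parents = defaultdict(list)
for src, action, dst in events:
    parents[dst].append((src, action))

def backward_trace(entity: str) -> list:
    """Return the provenance edges that causally led to `entity`."""
    chain, frontier, seen = [], [entity], {entity}
    while frontier:
        node = frontier.pop()
        for src, action in parents.get(node, []):
            chain.append((src, action, node))
            if src not in seen:
                seen.add(src)
                frontier.append(src)
    return chain

trace = backward_trace("exfil.zip")
```

On this toy data the trace runs from the flagged `exfil.zip` back through `powershell.exe` and `dropper.exe` to the originating browser download, which is the kind of attack-sequence reconstruction provenance-based detection relies on.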
Up to $370K
2030-09-30