NSF
As cyber threats grow increasingly sophisticated, securing computer networks is more critical than ever. Intrusion Detection Systems (IDS) help identify potential cyberattacks, but many rely on Artificial Intelligence techniques that act as “black boxes,” making their decisions difficult for humans to understand or trust: security analysts cannot see what is going on “under the hood” or why a model makes a given prediction. This project aims to improve the transparency of these systems by developing a novel Explainable Artificial Intelligence (XAI) technique and using it to build Explainable Intrusion Detection Systems (X-IDS). The key idea is to incorporate the time-based patterns inherent in network security data, enabling security professionals to better understand why an IDS flags certain activities as threats and improving trust, accountability, and decision-making in cybersecurity. By advancing explainable AI for time-sensitive security applications, this project supports national cybersecurity efforts and enhances the reliability of AI-driven defense mechanisms. In addition, the team will develop and publish hands-on lab exercises for K-12 students related to the research.

The approach is to develop Temporal Eclectic Rule Extraction (TERE), a novel white-box XAI method for IDS. Unlike existing approaches that rely on black-box surrogate models, TERE will extract human-readable decision rules directly from the internal neurons of temporal neural networks trained on network data. Because network activity and attacks unfold as sequences of packets, preserving this temporal structure in the extracted rules addresses a critical gap in explainability and trustworthiness, yielding more transparent and interpretable threat detection. A significant challenge in rule extraction is its computational complexity and the number of rules generated.
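The abstract does not spell out how eclectic rule extraction works, but the general recipe in the XAI literature is to approximate an individual hidden neuron's on/off behavior with an interpretable surrogate such as a shallow decision tree, then read IF-THEN rules off the tree's branches. The sketch below illustrates that general idea only; the feature names, the fixed "neuron" weights, and the tree depth are hypothetical, and this toy omits the temporal structure that distinguishes TERE.

```python
# Toy sketch of neuron-level "eclectic" rule extraction (hypothetical
# features and weights; not the project's TERE method). A shallow decision
# tree approximates one neuron's binarized activation, and its branches
# serve as human-readable rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(1000, 3))        # toy flow features
w, b = np.array([2.0, -1.0, 0.5]), 0.8           # a fixed "hidden neuron"
neuron_on = (X @ w > b).astype(int)              # binarized activation

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X, neuron_on)                           # surrogate for ONE neuron
rules = export_text(tree, feature_names=["pkt_rate", "duration", "bytes"])
fidelity = float((tree.predict(X) == neuron_on).mean())
print(rules)                                     # IF-THEN style rule text
print(f"surrogate fidelity: {fidelity:.2f}")
```

Fidelity (how often the extracted rules agree with the neuron they approximate) is the standard quality measure for such surrogates; a deeper tree raises fidelity but produces more and longer rules, which is exactly the rule-set-size challenge the abstract goes on to discuss.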
Traditional methods also tend to produce rule sets too large for security analysts to interpret, limiting their practical use. Optimization strategies will be developed to reduce computational overhead; a key approach will be to explore neuron-selection algorithms that efficiently identify the neurons relevant to rule extraction, minimizing unnecessary computation. Techniques to streamline and compress the extracted rules will also be explored, enhancing interpretability while maintaining accuracy. By integrating decision tree-based rule extraction with time-aware enhancements, this project aims to increase the explainability and trustworthiness of Explainable Intrusion Detection Systems (X-IDS). The proposed methods will be evaluated on large-scale intrusion detection datasets, assessing their ability to deliver highly accurate, interpretable, and trustworthy explanations.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
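One simple compression strategy consistent with the goals above is coverage-based pruning: keep only rules that explain a meaningful share of traffic not already covered by a broader rule. Everything in this sketch — the rule encoding, the toy rules, and the 5% cutoff — is invented for illustration and is not the project's actual technique.

```python
# Hypothetical coverage-based rule pruning: greedily keep rules that add
# at least 5% new coverage; narrow or redundant rules are discarded.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 2))         # toy flow features

# A rule is a conjunction of (feature index, threshold, direction) tests.
rules = [
    [(0, 0.50, ">")],                            # broad rule
    [(1, 0.30, "<")],
    [(0, 0.90, ">"), (1, 0.90, ">")],            # narrow, subsumed by rule 0
    [(0, 0.95, ">")],                            # also subsumed by rule 0
]

def covers(rule, X):
    """Boolean mask of samples matching every test in the rule."""
    mask = np.ones(len(X), dtype=bool)
    for f, t, d in rule:
        mask &= (X[:, f] > t) if d == ">" else (X[:, f] < t)
    return mask

kept, covered = [], np.zeros(len(X), dtype=bool)
for rule in sorted(rules, key=lambda r: covers(r, X).sum(), reverse=True):
    gain = covers(rule, X) & ~covered            # new samples explained
    if gain.sum() / len(X) >= 0.05:
        kept.append(rule)
        covered |= covers(rule, X)

print(f"kept {len(kept)} of {len(rules)} rules; coverage {covered.mean():.2f}")
```

On this toy data the two subsumed rules are dropped while overall coverage is unchanged, mirroring the stated goal of smaller rule sets without lost accuracy.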
Up to $155K
2027-05-31