NSF requires disclosure of AI tool usage in proposal preparation. Ensure you disclose the use of FindGrants' AI drafting in your application.
NSF
This project’s goal is to improve the robustness of machine learning models used to detect malicious software, or “malware”. Computers are constantly vulnerable to malware, which can steal sensitive information, disrupt operations, or damage systems. Detecting and preventing malware is a significant challenge, and although recent advances in deep learning have made it easier to identify potential malware threats, many deep learning models operate as “black boxes” that reveal little about how they reach their decisions. This opacity makes it hard to understand which properties of a piece of software caused a model to flag it, or fail to flag it, as malware. That in turn creates extra work for software developers, who must vet legitimate software that was wrongly flagged and assess the risks posed by new, evolving malware that the models cannot yet detect. Improving the interpretability and reliability of malware detection systems is therefore crucial to making software safer efficiently. This project aims to develop a robust framework for improving malware detection by identifying the key features learned by deep learning models, then generating new malware samples that might fool models with respect to those features. First, the project seeks to understand what features are learned and extracted by different neural network architectures, particularly those designed for image, sequence, and graph-based inputs. The goal is to gain insight into the representations and decision-making processes of various neural network models applied to diverse input domains. Second, the project explores whether new malicious binaries can be generated that evade detection models by strategically manipulating code based on the features used to identify malicious programs. This will involve developing techniques to create adversarial malware instances that can bypass current defense mechanisms.
Third, the project will investigate how different binary rewriting techniques affect the performance of various neural network models. By analyzing the impact of these modifications, the study aims to improve the robustness of malware detection systems. To achieve these objectives, a reinforcement learning-based approach will be used to modify raw binary code in a way that allows it to evade detection by multiple deep learning models. Once adversarial malware is generated, the modified code will be analyzed to uncover patterns that distinguish different malware families and explain why certain modifications are successful in evading detection. Ultimately, the project will contribute to the broader field of malware analysis by openly sharing the malware dataset, techniques, source code, and generated adversarial datasets with the research community. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
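The reinforcement-learning-based evasion loop described above can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the toy `detector_score` (fraction of non-zero bytes) stands in for a trained deep-learning detector, the three rewrite actions are hypothetical semantics-preserving padding edits, and the epsilon-greedy bandit is a simplification of a full RL agent — none of it is the project's actual method.

```python
import random

ACTIONS = ["append_zeros", "append_random", "flip_padding"]

def detector_score(binary: bytes) -> float:
    """Toy malice score: fraction of non-zero bytes (stand-in for a model)."""
    if not binary:
        return 0.0
    return sum(b != 0 for b in binary) / len(binary)

def apply_action(binary: bytes, action: str, rng: random.Random) -> bytes:
    # In a real system each action must preserve program semantics;
    # here we only append overlay/padding bytes, which never breaks the "program".
    if action == "append_zeros":
        return binary + b"\x00" * 64
    if action == "append_random":
        return binary + bytes(rng.randrange(256) for _ in range(64))
    if action == "flip_padding":
        return binary + b"\x00" * 32 + bytes([rng.randrange(256)])
    return binary

def evade(binary: bytes, threshold: float = 0.5, max_steps: int = 50,
          epsilon: float = 0.3, seed: int = 0):
    """Epsilon-greedy bandit over rewrite actions: keep a running value
    estimate per action, accept a rewrite only if it lowers the detector
    score, and stop once the score drops below `threshold`."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}   # running reward estimate per action
    n = {a: 0 for a in ACTIONS}     # times each action was tried
    current = binary
    for step in range(max_steps):
        score = detector_score(current)
        if score < threshold:
            return current, step    # evaded the (toy) detector
        # explore with probability epsilon, otherwise exploit the best action
        action = rng.choice(ACTIONS) if rng.random() < epsilon else max(q, key=q.get)
        candidate = apply_action(current, action, rng)
        reward = score - detector_score(candidate)  # reward = score reduction
        n[action] += 1
        q[action] += (reward - q[action]) / n[action]  # incremental mean update
        if reward > 0:
            current = candidate     # keep only rewrites that lower the score
    return current, max_steps
```

Running `evade(bytes([1]) * 255)` on a "binary" of all non-zero bytes (initial score 1.0) lets the bandit discover that zero-padding actions lower the score fastest; the second return value reports how many steps the evasion took, mirroring how an RL agent's action trace can later be mined to explain which modifications were successful.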
Up to $174K
2027-09-30
Research Infrastructure: National Geophysical Facility (NGF): Advancing Earth Science Capabilities through Innovation - EAR Scope
NSF — up to $26.6M
AmLight: The Next Frontier Towards Discovery in the Americas and Africa
NSF — up to $9M
CREST Phase II Center for Complex Materials Design
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Energy Technologies
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Post-Transcriptional Regulation
NSF — up to $7.5M
EPSCoR CREST Phase I: Center for Semiconductors Research
NSF — up to $7.5M