NSF AI Disclosure Required
NSF requires disclosure of AI tool usage in proposal preparation. Ensure you disclose the use of FindGrants' AI drafting in your application.
SBIR Phase I: Counteracting Social Engineering Attacks with Honeypot LLM Chatbots
NSF
About This Grant
The broader impact of this Small Business Innovation Research (SBIR) Phase I project lies in its potential to significantly reduce the economic and emotional harm caused by social engineering cyber-attacks, which manipulate trust to collectively defraud millions of Americans of tens of billions of dollars each year. The growing asymmetry between the cost of executing such attacks, which are conducted by organized crime and nation-state actors, and the cost of defending against them has created a critical vulnerability that threatens not only individuals but also technology companies, financial institutions, and the national security of the United States. This project uses defensive artificial intelligence technology to address this imbalance by intercepting, tracing, and aggregating the largest source of information about social engineering attacks as they happen, providing a valuable, real-time data stream for the cybersecurity industry, government, and consumer protection initiatives. The successful commercialization of the proposed technology will help shift the cybersecurity paradigm from reactive damage control to proactive prevention, reducing fraud-related expenditures, enhancing consumer confidence, and providing a critical layer of protection against the fastest-growing form of cybercrime.

This Small Business Innovation Research (SBIR) Phase I project addresses the growing threat of social engineering cyber-attacks, which exploit human vulnerabilities rather than technological weaknesses to commit fraud, conduct espionage, and manipulate organizations. Traditional cybersecurity measures struggle to detect and mitigate these attacks due to their conversational and psychological nature, leaving individuals, businesses, and government agencies at risk. The opportunity lies in developing an automated, scalable intelligence-gathering system capable of infiltrating and mapping cybercriminal networks in real time. This project proposes a novel approach using interactive artificial intelligence (AI) chatbot investigators, powered by large language models (LLMs), to engage with social engineering scammers and trace their tactics, techniques, and procedures across platforms. By simulating potential victims, these chatbot investigators will extract structured intelligence from attackers while maintaining consistency over extended time periods. Key research objectives include developing novel natural language processing techniques to create an AI agent that can autonomously engage with social engineering threats, and building a system capable of deploying chatbot networks across a wide variety of communication surfaces for large-scale threat intelligence. The anticipated results include a robust, scalable system for cyber threat mapping, significantly improving the ability to detect, analyze, and counteract social engineering scams at scale. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
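The chatbot-investigator loop described in the abstract (simulate a victim, keep a consistent persona, and turn attacker messages into structured intelligence) could be sketched as below. This is a minimal illustration, not the project's actual design: the `HoneypotAgent` class, the regex-based indicator extraction, and the `_scripted_reply` stub standing in for an LLM call are all hypothetical.

```python
import re
from dataclasses import dataclass, field


@dataclass
class HoneypotAgent:
    """Simulated-victim chatbot that logs attacker messages as structured intel."""
    persona: str  # a consistent victim persona, held across the whole conversation
    transcript: list = field(default_factory=list)
    indicators: dict = field(default_factory=lambda: {"urls": [], "wallets": []})

    def ingest(self, attacker_msg: str) -> str:
        """Record the attacker's message, extract indicators, and reply in persona."""
        self.transcript.append(("attacker", attacker_msg))
        # Extract URLs and (hypothetical) crypto-wallet strings as threat indicators.
        self.indicators["urls"] += re.findall(r"https?://\S+", attacker_msg)
        self.indicators["wallets"] += re.findall(r"\b(?:bc1|0x)[0-9a-zA-Z]{8,}\b", attacker_msg)
        reply = self._scripted_reply(attacker_msg)
        self.transcript.append(("agent", reply))
        return reply

    def _scripted_reply(self, msg: str) -> str:
        # Stub: a real system would call an LLM conditioned on self.persona and the
        # full transcript, which is how persona consistency would be maintained.
        if "http" in msg:
            return "That link looks confusing, can you walk me through it?"
        return f"Hi, this is a {self.persona}. What do I need to do?"


agent = HoneypotAgent(persona="retired teacher")
agent.ingest("Your account is locked! Verify at http://scam.example/verify now")
print(agent.indicators["urls"])  # the phishing URL is captured as an indicator
```

A production system would replace the scripted reply with LLM generation and feed the accumulated `indicators` into an aggregation pipeline, which is where the "real-time data stream" described above would originate.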
Up to $305K
2026-09-30
One-time $749 fee · Includes AI drafting + templates + PDF export