Large language models (LLMs) have transformed how people interact with artificial intelligence (AI) systems, achieving state-of-the-art results in tasks ranging from education to healthcare. Their ability to understand and generate human-like text has opened new possibilities for advancing scientific research. One of the most promising applications of LLMs in this context is research idea generation, in which LLMs analyze existing knowledge to identify novel research directions. Although LLMs may generate claims that are not fully supported by existing publications, they can also surface potentially innovative research ideas that scientists working alone might overlook. While a number of studies have yielded promising results, significant advances are still required for LLMs to generate accurate, easily verifiable, and trustworthy responses for scientific research. Moreover, for foundation models to achieve a comparable impact in research ideation, they must be able to optimize both external and internal knowledge sources. Addressing this need, this project develops approaches that effectively optimize two types of knowledge: external knowledge, drawn from diverse data sources, and internal knowledge, the parametric understanding acquired during training. A dual-framework solution is designed for this optimization using agentic AI reasoning techniques. This research can improve how foundation models adapt external and internal knowledge and increase their utility for scientific tasks.

To optimize knowledge utilization in foundation models for research ideation, this project conducts three research tasks. (1) The project develops an adversary-based reasoning approach that harnesses the vast parametric knowledge within LLMs to improve research ideation.
It introduces adversarial learning at inference time, a paradigm shift from prompt design that unlocks the full potential of LLMs' parametric knowledge without requiring additional training. (2) The project develops a reinforcement learning-based approach that systematically improves the use of external knowledge for idea generation. It formulates the interaction between language agents and external knowledge bases as a nested Markov Decision Process (MDP): the outer MDP governs high-level action generation through interactions with the information retrieval environment, while the inner MDP controls token generation within the LLM. (3) The project develops a knowledge-based hallucination detection framework that assesses the groundedness of generated research ideas and identifies hallucinated claims by analyzing the rationale behind idea generation. The project also designs metrics to assess whether the ideation approaches improve the novelty and feasibility of generated research ideas, and conducts extensive experimental studies that evaluate the approaches across a variety of existing LLMs. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
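The nested MDP described in task (2) can be pictured as two interacting loops: an outer loop in which an agent takes retrieval actions against a knowledge-base environment, and an inner loop in which tokens are generated one step at a time conditioned on what was retrieved. The sketch below illustrates that structure only; all names (KnowledgeBase, ideate, the toy random "generator") are hypothetical and stand in for the project's actual retrieval environment, policy, and LLM, which the abstract does not specify.

```python
import random

class KnowledgeBase:
    """Toy retrieval environment: the outer MDP acts on it via queries."""
    def __init__(self, docs):
        self.docs = docs

    def retrieve(self, query):
        # Outer-MDP transition: return documents matching the query string.
        return [d for d in self.docs if query.lower() in d.lower()]

def inner_mdp_generate(context, vocab, max_tokens=5, seed=0):
    """Inner MDP: token-level generation, one action (token) per step.
    A real system would condition a language model on `context`; here a
    seeded random choice merely stands in for the token policy."""
    rng = random.Random(seed)
    tokens = [rng.choice(vocab) for _ in range(max_tokens)]
    return " ".join(tokens)

def ideate(kb, queries, vocab):
    """Outer MDP: alternate a retrieval action with an inner-MDP rollout."""
    trajectory = []
    for q in queries:                       # each query = one outer-MDP action
        context = kb.retrieve(q)            # environment responds with documents
        idea = inner_mdp_generate(context, vocab)
        trajectory.append((q, context, idea))
    return trajectory

kb = KnowledgeBase(["LLM reasoning survey", "Retrieval-augmented generation"])
traj = ideate(kb, ["retrieval", "reasoning"],
              vocab=["explore", "combine", "refine"])
print(len(traj))  # one (query, context, idea) tuple per outer-MDP step
```

In an actual RL formulation both loops would carry rewards and learned policies; this sketch only shows how the outer (retrieval) and inner (token) decision processes nest.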
Up to $300K
2028-07-31