NSF
Medical image segmentation is essential for clinical decision making and disease monitoring, yet current deep-learning approaches are limited by their reliance on imaging data alone and their lack of contextual understanding. Vision-language models (VLMs) offer a promising alternative by generating textual annotations, but their dependence on manually crafted prompts and poor adaptation to segmentation tasks constrain their clinical utility. Moreover, these models struggle to generalize across imaging modalities and anatomical regions. This project develops an automated framework that generates segmentation-specific textual descriptions without human-created prompts, improving annotation efficiency and segmentation quality. It further integrates dynamic knowledge-graph reasoning to embed evolving medical expertise into the annotation process, enhancing adaptability across diverse imaging contexts. The approach aims to create robust and generalizable artificial intelligence (AI) tools for real-world clinical use. Broader impacts of the project include engaging students in hands-on research, fostering interdisciplinary collaboration, and releasing open-access tools that advance science and education. Clinical relevance is ensured through close collaboration with medical experts, allowing the research to address real-world healthcare needs and support translational impact.

The project introduces a multimodal framework that leverages the bidirectional relationship between images and text to refine segmentation.
Key components include: (1) an auto-prompting mechanism driven by graph-based reasoning to produce task-specific textual descriptions; (2) a knowledge-graph module that encodes and updates domain expertise to improve generalization; (3) a multi-level feature-alignment strategy with asynchronous fusion and bidirectional encoding to enhance multimodal learning; and (4) a closed-loop learning paradigm wherein segmentation and annotation mutually refine each other. Together, these components establish a comprehensive system for automated and adaptive medical image segmentation with high clinical relevance. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
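The closed-loop paradigm in component (4) can be illustrated with a toy sketch. The code below is an assumed, highly simplified illustration of the idea only (it is not the project's implementation): a "segmenter" combines image evidence with a text-derived prior to produce a mask, an "annotator" re-estimates that prior from the mask, and the two alternate so each refines the other. All function names, weights, and thresholds here are hypothetical.

```python
import numpy as np

def segment(image, text_prior, thresh=0.5):
    """Toy segmenter: fuse image evidence with a text-derived prior map,
    then threshold. The 0.7/0.3 weighting is an arbitrary illustration."""
    score = 0.7 * image + 0.3 * text_prior
    return (score > thresh).astype(float)

def annotate(image, mask):
    """Toy annotator: re-estimate the prior from the current mask by
    normalizing intensities against the mean inside the mask."""
    if mask.sum() == 0:
        return np.zeros_like(image)
    fg_mean = image[mask > 0].mean()
    return np.clip(image / max(fg_mean, 1e-6), 0.0, 1.0) * mask

def closed_loop(image, n_iter=3):
    """Alternate segmentation and annotation so each refines the other."""
    prior = np.full_like(image, 0.5)  # uninformative initial prior
    for _ in range(n_iter):
        mask = segment(image, prior)
        prior = annotate(image, mask)
    return mask

# Synthetic 8x8 "scan" with a bright lesion-like patch.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
image[2:5, 2:5] += 0.6
image = np.clip(image, 0.0, 1.0)
mask = closed_loop(image)
print(mask.shape)  # (8, 8)
```

In the actual framework, the prior would come from VLM-generated, knowledge-graph-guided text rather than pixel statistics; the sketch only shows the alternating refinement structure.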
Up to $210K
2028-09-30
New York Systems Change and Inclusive Opportunities Network (NY SCION) — Labor
Trade Adjustment Assistance (TAA) — Labor
Occupational Safety & Health - Training & Education (OSH T&E) — Labor
The Charter School Revolving Loan Fund Program — State Treasurer's Office
CEFA Bond Financing Program — State Treasurer's Office