
CAREER: Algorithm-Hardware Co-design of Efficient Large Graph Machine Learning for Electronic Design Automation

NSF


About This Grant

Estimating Power, Performance, and Area (PPA) earlier in the electronic design automation (EDA) flow would improve the Quality of Results (QoR) and reliability in chip design. Classical analytical or heuristic methods can be challenging to fine-tune, especially for complex problems. Machine learning (ML) methods have proven effective at addressing these problems. Graph Neural Networks (GNNs) have gained popularity since they are among the most natural ways to represent the fundamental objects in the EDA flow. However, with increased design complexity and chip capacity, a growing performance gap exists between the extremely large graphs in EDA and the insufficient support from general-purpose hardware, such as mainstream graphics processing units (GPUs). This project aims to expedite large graph machine learning across various EDA tasks through a full-fledged development of efficient and scalable computing paradigms. This project's novelties are EDA domain knowledge-aware graph machine learning, training acceleration, and algorithm-hardware co-design and optimization. The project's broader significance and importance include: (1) to advance the field of machine learning in chip design, highlighted in the National Artificial Intelligence Initiative; (2) to deepen the understanding of interactions among EDA domain knowledge, graph learning, and GPU acceleration; (3) to enrich the computer engineering curriculum and promote participation from undergraduates, underrepresented groups, and K-12 students in STEM fields through relevant programs. The project will develop a design paradigm for efficient, scalable, and practical algorithm-hardware co-optimized solutions to significantly accelerate large graph machine learning on EDA tasks using a single GPU.
This project consists of three coherent research thrusts: (1) to develop an algorithm-hardware co-optimized paradigm, focusing on restudying EDA graph features, introducing partitioning and selective re-growth methods, and tailoring GPU kernels for unified graph machine learning on EDA tasks using a single GPU; (2) to accelerate large circuit GNN training on a single GPU by implementing a tiled reversible architecture for low-memory training and designing a maxK nonlinearity function to reduce computation costs; (3) to jointly integrate EDA domain knowledge, graph learning, and hardware optimizations to co-search for the appropriate hardware primitives and GNN compression strategies, as well as closely leverage the unique properties of circuit graphs. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
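The maxK nonlinearity named in thrust (2) can be illustrated with a minimal NumPy sketch: keep the k largest entries of each node's feature vector and zero the rest, so the sparsified features cheapen the subsequent aggregation. The function name, shapes, and values below are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def maxk(x: np.ndarray, k: int) -> np.ndarray:
    """maxK nonlinearity sketch: for each row (one node's feature
    vector), keep the k largest entries and zero out the rest.
    The zeros make the feature matrix sparse, which is what reduces
    the cost of the downstream sparse-dense aggregation."""
    out = np.zeros_like(x)
    # column indices of the k largest entries in each row
    idx = np.argpartition(x, -k, axis=1)[:, -k:]
    rows = np.arange(x.shape[0])[:, None]
    out[rows, idx] = x[rows, idx]
    return out

# toy example: 3 nodes, 5-dimensional features, keep k=2 per node
h = np.array([[0.1, 0.7, 0.3, 0.9, 0.2],
              [0.5, 0.4, 0.8, 0.1, 0.6],
              [0.2, 0.2, 0.1, 0.3, 0.4]])
sparse_h = maxk(h, 2)
```

With k=2, each row of `sparse_h` retains exactly two nonzero entries (e.g. 0.7 and 0.9 survive in the first row), so aggregation over the graph touches 2/5 of the features per node.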

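The low-memory training idea in thrust (2) rests on reversibility: a reversible block's inputs can be recomputed exactly from its outputs, so intermediate activations need not be cached for backpropagation and training memory stays roughly constant in depth. A minimal NumPy sketch of a RevNet-style additive coupling follows; the sub-functions F and G are hypothetical placeholders, not the project's tiled architecture.

```python
import numpy as np

def rev_forward(x1, x2, F, G):
    """Forward pass of one reversible block (additive coupling):
        y1 = x1 + F(x2),  y2 = x2 + G(y1).
    Nothing from this pass has to be stored for backprop."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    """Exactly recover the block's inputs from its outputs:
        x2 = y2 - G(y1),  x1 = y1 - F(x2).
    During the backward pass, inputs are recomputed on the fly
    instead of being cached, trading compute for memory."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# round-trip check with arbitrary placeholder sub-functions
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
F = lambda t: np.tanh(t)
G = lambda t: 0.5 * t
y1, y2 = rev_forward(x1, x2, F, G)
r1, r2 = rev_inverse(y1, y2, F, G)
```

The round trip recovers `x1` and `x2` to machine precision, which is the property a tiled reversible design would exploit to keep per-GPU memory low while training on very large circuit graphs.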
Focus Areas

machine learning, engineering

Eligibility

university, nonprofit, small business

Funding Range

Up to $214K

Deadline

2029-04-30

Complexity

Medium