
Poster

DeepGate4: Efficient and Effective Representation Learning for Circuit Design at Scale

Ziyang Zheng · Shan Huang · Jianyuan Zhong · Zhengyuan Shi · Guohao Dai · Ningyi Xu · Qiang Xu

Hall 3 + Hall 2B #191
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Circuit representation learning has become pivotal in electronic design automation, enabling critical tasks such as testability analysis, logic reasoning, power estimation, and SAT solving. However, existing models face significant challenges in scaling to large circuits due to limitations such as over-squashing in graph neural networks and the quadratic complexity of transformer-based models. To address these issues, we introduce DeepGate4, a scalable and efficient graph transformer specifically designed for large-scale circuits. DeepGate4 incorporates several key innovations: (1) an update strategy tailored for circuit graphs, which reduces memory complexity to sub-linear and is adaptable to any graph transformer; (2) a GAT-based sparse transformer with global and local structural encodings for AIGs; and (3) an inference acceleration CUDA kernel that fully exploits the unique sparsity patterns of AIGs. Our extensive experiments on the ITC99 and EPFL benchmarks show that DeepGate4 significantly surpasses state-of-the-art methods, achieving 15.5% and 31.1% performance improvements over the next-best models, respectively. Furthermore, the Fused-DeepGate4 variant reduces runtime by 35.1% and memory usage by 46.8%, making it highly efficient for large-scale circuit analysis. These results demonstrate the potential of DeepGate4 to handle complex EDA tasks while offering superior scalability and efficiency.
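
To make the second innovation concrete, below is a minimal sketch (not the authors' implementation) of GAT-style attention restricted to an AIG's existing fanin edges, written in plain PyTorch. The class name SparseGATLayer, the tensor layouts, and all parameter names are hypothetical; the sketch only illustrates why such a layer sidesteps the quadratic cost of dense transformers, since attention logits are computed per edge rather than per node pair.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseGATLayer(nn.Module):
    """GAT-style attention computed only along AIG fanin edges.

    Cost scales with the number of edges |E| (each AND gate has at
    most two fanins), not with |V|^2 as in a dense transformer.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.att_src = nn.Linear(dim, 1, bias=False)
        self.att_dst = nn.Linear(dim, 1, bias=False)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [N, dim] node embeddings; edge_index: [2, E], fanin -> gate.
        src, dst = edge_index
        h = self.proj(x)
        # Unnormalized attention logit for each existing edge (as in GAT).
        logit = F.leaky_relu(self.att_src(h)[src] + self.att_dst(h)[dst],
                             0.2).squeeze(-1)
        # Numerically stable softmax over each gate's fanins.
        m = torch.full((x.size(0),), float("-inf"), device=x.device)
        m = m.scatter_reduce(0, dst, logit, reduce="amax", include_self=True)
        w = (logit - m[dst]).exp()
        denom = torch.zeros(x.size(0), device=x.device).scatter_add(0, dst, w)
        alpha = w / denom[dst].clamp_min(1e-16)
        # Weighted sum of fanin messages into each gate.
        return torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])

The scatter/gather sequence above is exactly the kind of memory-bound pattern that a fused CUDA kernel, such as the one the abstract describes, could collapse into a single pass over the AIG's fixed two-fanin structure.
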
