AGD: A Learning-based Optimization Framework for EDA and its Application to Gate Sizing

Phuoc Pham, Jaeyong Chung

DAC (2023)

Abstract
In electronic design automation (EDA), most simulation models are not differentiable, and many design choices are discrete. As a result, greedy optimization methods based on numerical gradients have been widely used, although they suffer from suboptimal solutions. Analytic methods, on the other hand, may offer better solutions, but at the cost of enormous research effort. Reinforcement learning (RL) has been leveraged to tackle this problem owing to its generality; however, RL suffers from notorious sample inefficiency, which is exacerbated in EDA because data sampling is very expensive due to slow simulations. This paper proposes an alternative to RL for EDA, namely analytic gradient descent (AGD). Our method computes analytic gradients of a design objective with respect to continuous and discrete design choices through a neural network trained on a simulation model, and then performs a gradient descent procedure that optimizes the design objective directly. We demonstrate AGD on the well-known gate sizing problem and show that our method comes very close to an industry-leading commercial tool in terms of design quality of result (QoR), while taking only several person-months to develop, compared with decades of dedicated human engineering. In addition, we show that AGD can generalize to unseen circuits with little circuit-specific training and in a small amount of execution time.
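To make the abstract's core idea concrete, below is a minimal sketch of how such an approach could look in PyTorch: a frozen neural surrogate stands in for the non-differentiable simulator, and the discrete gate-size choices are relaxed into "learnable selectors" (softmax weights over the available sizes, echoing the paper's keyword) so the design objective can be minimized by plain gradient descent. This is an illustration under assumptions, not the authors' implementation; all names, dimensions, the surrogate architecture, and the softmax relaxation are hypothetical.

```python
# Hypothetical sketch of the AGD idea: gradient descent on relaxed discrete
# design choices through a frozen neural surrogate of the simulator.
import torch
import torch.nn as nn

NUM_GATES, NUM_SIZES, FEAT = 64, 8, 16  # hypothetical problem dimensions

# Surrogate assumed to be trained beforehand on simulator samples: it maps
# per-gate features plus a (soft) one-hot size choice to a scalar objective,
# e.g. a weighted sum of delay and power.
surrogate = nn.Sequential(
    nn.Linear(NUM_GATES * (FEAT + NUM_SIZES), 128),
    nn.ReLU(),
    nn.Linear(128, 1),
)
surrogate.requires_grad_(False)  # frozen during design optimization

features = torch.randn(NUM_GATES, FEAT)  # fixed circuit features (dummy data)
# Learnable selectors: one logit vector per gate over the candidate sizes.
logits = torch.zeros(NUM_GATES, NUM_SIZES, requires_grad=True)

opt = torch.optim.Adam([logits], lr=0.1)
for step in range(200):
    sel = torch.softmax(logits, dim=-1)            # soft size selection per gate
    x = torch.cat([features, sel], dim=-1).flatten()
    objective = surrogate(x)                       # predicted design objective
    opt.zero_grad()
    objective.backward()                           # analytic gradient through the NN
    opt.step()

sizes = logits.argmax(dim=-1)  # discretize: pick the highest-weight size per gate
```

The design choice to optimize selector logits rather than the sizes themselves is what lets analytic gradients flow to inherently discrete decisions; after convergence, the soft selection is rounded back to a concrete size per gate.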
Keywords
gate sizing optimization, reinforcement learning, analytic gradient descent, learnable selectors