Fast selection of compiler optimizations using performance prediction with graph neural networks

Concurrency and Computation: Practice and Experience (2023)

Abstract
Tuning application performance on modern computing infrastructures involves choices in a vast design space, as modern architectures contain several complex structures that affect performance. Moreover, different applications use these structures in different ways, leading to a challenging performance function. Consequently, it is hard for compilers or experts to find the compilation parameters that maximize this performance function for a given application. One approach to this problem is to evaluate many possible optimization plans and select the best among them. However, executing an application to measure its performance for every plan can be very expensive. To address this, previous work has investigated Machine Learning techniques that quickly predict application performance without executing the application. In this work, we evaluate the use of graph neural networks (GNNs) to make fast predictions, without executing the application, that guide the selection of good optimization sequences, and we propose a GNN architecture for this task. We train and test it on 30 thousand different compilation plans applied to 300 different applications, using ARM64 and LLVM IR code representations as input. Our results indicate that GNNs can learn features from the control and data flow graph and outperform non-graph-aware Machine Learning models. Our GNN architecture achieved 91% accuracy on our dataset, compared to 79% for a non-graph-aware architecture, while taking only 16 ms to predict a given input. If the application being optimized took an average of 10 s to execute and we evaluated 1000 optimization sequences, it would take almost 9 h to assess all pairs, but only 16 s with our GNN.
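The abstract does not specify the GNN layers or the exact prediction target, so the following is only a minimal sketch of the general idea, assuming PyTorch and PyTorch Geometric: a graph network embeds the control and data flow graph, the embedding is combined with an encoding of a candidate optimization plan, and a small head scores the plan so that thousands of plans can be ranked in milliseconds each instead of being executed. The names (PlanPerformancePredictor, select_best_plan), the choice of GCN layers, and all dimensions are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the authors' code): score compilation plans with a
# GNN over the program's control/data flow graph instead of executing them.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class PlanPerformancePredictor(torch.nn.Module):
    def __init__(self, num_node_features, plan_dim, hidden=64):
        super().__init__()
        # Two graph-convolution layers learn features from the CDFG structure.
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        # The pooled graph embedding is concatenated with an encoding of the
        # optimization plan and mapped to a predicted performance score.
        self.head = torch.nn.Sequential(
            torch.nn.Linear(hidden + plan_dim, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, x, edge_index, batch, plan):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)  # one embedding per graph
        return self.head(torch.cat([g, plan], dim=1)).squeeze(-1)


def select_best_plan(model, graph, plans):
    """Score every candidate optimization plan with the model (milliseconds
    per prediction) rather than executing the application, and keep the best."""
    model.eval()
    with torch.no_grad():
        batch = torch.zeros(graph.x.size(0), dtype=torch.long)
        scores = torch.stack([
            model(graph.x, graph.edge_index, batch, p.unsqueeze(0)).squeeze()
            for p in plans
        ])
    return int(scores.argmax())
```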
Keywords
compiler auto-tuning, graph neural networks, optimizing compilers