
LLM-Aided Compilation for Tensor Accelerators

Charles Hong, Sahil Bhatia, Altan Haan, Shengjun Kris Dong, Dima Nikiforov, Alvin Cheung, Yakun Sophia Shao

CoRR (2024)

Abstract
Hardware accelerators, in particular accelerators for tensor processing, have many potential application domains. However, they currently lack the software infrastructure to support the majority of domains outside of deep learning. A compiler that can easily be updated to reflect changes in both application and hardware would provide great benefits to the agile development of hardware accelerators. In this work, we discuss how large language models (LLMs) could be leveraged to build such a compiler. Specifically, we demonstrate the ability of GPT-4 to achieve high pass rates in translating code to the Gemmini accelerator, and prototype a technique for decomposing translation into smaller, more LLM-friendly steps. Additionally, we propose a 2-phase workflow for utilizing LLMs to generate hardware-optimized code.
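The abstract describes a two-phase workflow: first obtain a functionally correct translation of the source program to accelerator (Gemmini) code, then search for a hardware-optimized variant guided by a cost model. Below is a minimal, hypothetical Python sketch of such a loop; the helpers query_llm, run_functional_test, and estimate_cycles are placeholders introduced for illustration and are not the paper's actual API or implementation.

```python
# Hypothetical sketch of a two-phase LLM compilation workflow:
# phase 1 translates source code to Gemmini accelerator code and checks
# functional correctness; phase 2 searches for a lower-cost variant
# using a cost model. All helper names are placeholders.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM such as GPT-4."""
    raise NotImplementedError

def run_functional_test(candidate: str) -> bool:
    """Placeholder: compile and run the candidate against reference outputs."""
    raise NotImplementedError

def estimate_cycles(candidate: str) -> float:
    """Placeholder cost model (e.g., analytical or simulator-based)."""
    raise NotImplementedError

def translate(source: str, max_attempts: int = 5) -> str | None:
    # Phase 1: obtain a functionally correct translation to Gemmini calls.
    prompt = f"Translate the following code to use the Gemmini accelerator:\n{source}"
    for _ in range(max_attempts):
        candidate = query_llm(prompt)
        if run_functional_test(candidate):
            return candidate
        prompt += "\nThe previous attempt failed the tests; please fix it."
    return None

def optimize(correct_code: str, rounds: int = 3) -> str:
    # Phase 2: ask the LLM for optimized variants and keep the cheapest
    # candidate that still passes the functional tests.
    best, best_cost = correct_code, estimate_cycles(correct_code)
    for _ in range(rounds):
        variant = query_llm(f"Optimize this Gemmini code for fewer cycles:\n{best}")
        if run_functional_test(variant) and estimate_cycles(variant) < best_cost:
            best, best_cost = variant, estimate_cycles(variant)
    return best
```

In this sketch, decomposing translation into smaller, more LLM-friendly steps would amount to calling translate on individual kernels (e.g., a single matrix multiplication) rather than on the whole program at once.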
Keywords
Deep Learning, Pass Rate, Hardware Accelerators, Agile Development, Deep Neural Network, Natural Language, Search Space, Matrix Multiplication, Load Data, Cost Model, Model Predictive Control, Target Language, Code Generation, Instruction Set Architecture, Optimal Code, Linear Quadratic Regulator, Training Corpus, Domain-specific Languages, Source Program, Input Code