Smaller, Faster, Greener: Compressing Pre-trained Code Models via Surrogate-Assisted Optimization

CoRR (2023)

Abstract
Large pre-trained models of code have been adopted to tackle many software engineering tasks and have achieved excellent results. However, their large size and high energy consumption prevent them from being widely deployed on developers' computers to provide real-time assistance. A recent study by Shi et al. showed that pre-trained code models can be compressed to a small size. However, other important considerations in deploying models have not been addressed: the model should also have fast inference speed and minimal energy consumption. This requirement motivates us to propose Avatar, a novel approach that reduces model size as well as inference latency and energy consumption without compromising effectiveness (i.e., prediction accuracy). Avatar trains a surrogate model to predict the performance of a tiny model given only its hyperparameter settings. Moreover, Avatar designs a new fitness function that embeds multiple key objectives: maximizing the predicted model accuracy while minimizing the model size, inference latency, and energy consumption. After finding the best model hyperparameters with a tailored genetic algorithm (GA), Avatar employs knowledge distillation to train the tiny model. We evaluate Avatar and the baseline approach from Shi et al. on three datasets for two popular software engineering tasks: vulnerability prediction and clone detection. Avatar compresses models to a small size (3 MB), which is 160× smaller than the original pre-trained models. Compared with the original models, the compressed models significantly reduce inference latency on all three datasets, by 62×, 53×, and 186×, respectively. In terms of energy consumption, the compressed models require only 0.8 GFLOPs, 173× fewer than the original pre-trained models.
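To make the surrogate-assisted search concrete, below is a minimal, hypothetical Python sketch of the loop the abstract describes: a genetic algorithm scores candidate hyperparameter settings with a surrogate instead of training each tiny model. The search space, the surrogate, the cost proxies, and the objective weights here are all illustrative assumptions, not the paper's actual design; Avatar's real surrogate is learned from measured model performance.

```python
import random

# Hypothetical hyperparameter search space for the tiny model
# (illustrative ranges, not Avatar's actual space).
SPACE = {
    "hidden_size": [64, 128, 256],
    "num_layers": [1, 2, 3],
    "vocab_size": [1000, 2000, 5000],
}

def sample_config():
    """Draw a random hyperparameter setting from the search space."""
    return {k: random.choice(v) for k, v in SPACE.items()}

def surrogate_accuracy(cfg):
    """Stand-in for the trained surrogate: predicts accuracy from the
    hyperparameters alone, so no tiny model is trained during search."""
    # Toy monotone proxy; the real surrogate is a learned model.
    return (0.5
            + 0.1 * SPACE["hidden_size"].index(cfg["hidden_size"])
            + 0.05 * cfg["num_layers"] / 3)

def cost(cfg):
    """Rough proxies for model size (MB), latency (ms), and GFLOPs."""
    size = cfg["hidden_size"] * cfg["num_layers"] * cfg["vocab_size"] / 1e6
    latency = cfg["hidden_size"] * cfg["num_layers"] / 50
    gflops = size * 0.3  # FLOPs stand in for energy consumption
    return size, latency, gflops

def fitness(cfg):
    """Scalar fitness embedding the multiple objectives: maximize predicted
    accuracy, minimize size, latency, and energy; weights are assumptions."""
    size, latency, gflops = cost(cfg)
    return surrogate_accuracy(cfg) - 0.01 * (size + latency + gflops)

def mutate(cfg):
    """Resample one hyperparameter to produce an offspring."""
    child = dict(cfg)
    key = random.choice(list(SPACE))
    child[key] = random.choice(SPACE[key])
    return child

def genetic_search(pop_size=20, generations=30):
    """Tiny GA: keep the fittest half, refill with mutated offspring."""
    pop = [sample_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = genetic_search()
    print("best config:", best, "fitness:", round(fitness(best), 3))
```

In Avatar's pipeline, the configuration returned by such a search is not trained from scratch; the tiny model is trained via knowledge distillation from the original pre-trained model.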