Large Multimodal Model Compression via Efficient Pruning and Distillation at AntGroup
arXiv (2023)
Abstract
The deployment of Large Multimodal Models (LMMs) within AntGroup has
significantly advanced multimodal tasks in payment, security, and advertising,
notably enhancing advertisement audition tasks in Alipay. However, the
deployment of such sizable models introduces challenges, particularly increased latency and carbon emissions, which are antithetical to the ideals of
Green AI. This paper introduces a novel multi-stage compression strategy for
our proprietary LMM, AntGMM. Our methodology pivots on three main aspects:
employing small training sample sizes, addressing multi-level redundancy
through multi-stage pruning, and introducing an advanced distillation loss
design. In our research, we constructed a dataset, the Multimodal Advertisement
Audition Dataset (MAAD), from real-world scenarios within Alipay, and conducted
experiments to validate the reliability of our proposed strategy. Furthermore, the
effectiveness of our strategy is evidenced by three months of successful operation in
Alipay's real-world multimodal advertisement audition, beginning in September 2023.
Notably, our approach achieved a substantial reduction in latency, from 700 ms to
90 ms, while incurring only a slight decrease in online performance. Moreover, our
compressed model is
estimated to reduce electricity consumption by approximately 75 million kWh
annually compared to the direct deployment of AntGMM, demonstrating our
commitment to Green AI initiatives. We will publicly release our code and the
MAAD dataset after review (https://github.com/MorinW/AntGMM_Pruning).
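
The abstract does not specify AntGMM's advanced distillation loss design. As a point of reference only, the sketch below shows a standard Hinton-style knowledge-distillation objective in PyTorch; the function name and the `temperature` and `alpha` hyperparameters are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch of a conventional knowledge-distillation loss (assumed
# Hinton-style baseline); AntGMM's actual loss design is not given in the abstract.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Soft targets: KL divergence between temperature-softened teacher and
    # student distributions, scaled by T^2 to keep gradient magnitudes stable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```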