Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model

semanticscholar (2021)

Abstract
Recently, large-scale transformer-based models have proven effective on a wide range of tasks across many domains. Nevertheless, putting them into production is very expensive, requiring comprehensive optimization techniques to reduce inference costs. This paper introduces a series of transformer inference optimization techniques at both the algorithm and implementation levels. These techniques include a pre-padding decoding mechanism that improves token parallelism for generation, as well as highly optimized kernels designed for very long inputs and large hidden sizes. On this basis, we propose a transformer inference acceleration library, Easy and Efficient Transformer (EET), which delivers a significant performance improvement over existing libraries. Compared to Faster Transformer v4.0's implementation of the transformer decoder layer on an A100, EET achieves an average 2-4.5x speedup over the state of the art. EET is available at https://github.com/NetEase-FuXi/EET. A demo video is available at https://youtu.be/22UPcNGcErg.
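The abstract does not detail the pre-padding decoding mechanism, and the sketch below is not EET's actual API. It is a minimal conceptual illustration, assuming token-id lists and a hypothetical PAD_ID, of the general idea behind pre-padding: left-padding variable-length prompts to a common length so every sequence in a batch emits its next token at the same position, which lets one batched kernel launch serve the whole generation step.

```python
# Conceptual sketch of pre-padding for batched generation (not EET's actual API).
# Variable-length prompts are left-padded to a common length so every sequence
# produces its next token at the same step, enabling batched token parallelism.

PAD_ID = 0  # hypothetical padding token id


def pre_pad(prompts, pad_id=PAD_ID):
    """Left-pad token-id lists to the length of the longest prompt."""
    max_len = max(len(p) for p in prompts)
    padded, masks = [], []
    for p in prompts:
        pad = max_len - len(p)
        padded.append([pad_id] * pad + p)
        masks.append([0] * pad + [1] * len(p))  # 1 = real token, 0 = padding
    return padded, masks


def generate_step(batch, masks, model):
    """One decoding step: every row appends its next token in lock-step."""
    next_tokens = model(batch, masks)  # batched forward pass over aligned rows
    for row, tok in zip(batch, next_tokens):
        row.append(tok)
    for m in masks:
        m.append(1)
    return batch, masks


if __name__ == "__main__":
    prompts = [[11, 12, 13], [21, 22], [31]]
    batch, masks = pre_pad(prompts)
    # Stand-in "model" that emits a constant token; a real model would run a
    # transformer forward pass over the padded batch with the attention mask.
    echo = lambda b, m: [99 for _ in b]
    batch, masks = generate_step(batch, masks, echo)
    print(batch)  # every row has the same length, with its new token at the end
```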