
YUAN 2.0: A Large Language Model with Localized Filtering-based Attention.

Shaohua Wu, Xudong Zhao, Shenling Wang, Jiangang Luo, Lingjun Li, Xi Chen, Bing Zhao, Wei Wang, Tong Yu, Rongguo Zhang, Jiahua Zhang, Chao Wang

CoRR (2023)

Abstract
In this work, Localized Filtering-based Attention (LFA) is introduced to incorporate prior knowledge of the local dependencies of natural language into attention. Based on LFA, we develop and release Yuan 2.0, a large language model with parameter counts ranging from 2.1 billion to 102.6 billion. A data filtering and generation method is presented to build high-quality pretraining and fine-tuning datasets. A distributed training method with non-uniform pipeline parallelism, data parallelism, and optimizer parallelism is proposed, which greatly reduces the bandwidth requirements of intra-node communication and achieves good performance in large-scale distributed training. Yuan 2.0 models display impressive abilities in code generation, math problem-solving, and chat compared with existing models. The latest version of Yuan 2.0, including model weights and source code, is accessible on GitHub.
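
The abstract only states that LFA injects a prior on local dependencies into attention, not how. The sketch below is one plausible reading, assuming the local prior is a small causal depthwise 1D convolution applied along the sequence before standard multi-head attention; the module name, kernel size, and the choice to filter queries and keys are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of a localized-filtering prior for attention (PyTorch).
# Assumption: locality is injected via a causal depthwise Conv1d over the
# sequence before standard multi-head self-attention.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocallyFilteredAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, kernel_size: int = 3):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.kernel_size = kernel_size
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Depthwise conv: each channel mixes only its own nearby positions,
        # encoding a local-dependency prior along the sequence axis.
        self.local_filter = nn.Conv1d(d_model, d_model, kernel_size, groups=d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def filt(z: torch.Tensor) -> torch.Tensor:
            # Causal local filtering: left-pad so position i sees only <= i.
            z = z.transpose(1, 2)                        # (b, d, t)
            z = F.pad(z, (self.kernel_size - 1, 0))
            return self.local_filter(z).transpose(1, 2)  # (b, t, d)

        # Assumed placement: filter queries and keys before attention.
        q, k = filt(q), filt(k)

        def split(z: torch.Tensor) -> torch.Tensor:
            return z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(causal, float("-inf"))
        attn = scores.softmax(dim=-1) @ v                # (b, heads, t, d_head)
        attn = attn.transpose(1, 2).reshape(b, t, d)
        return self.out(attn)
```

Usage: `LocallyFilteredAttention(d_model=512, n_heads=8)(torch.randn(2, 16, 512))` returns a `(2, 16, 512)` tensor; the only change from vanilla multi-head attention is the causal depthwise filter applied before the dot-product step.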