Tele-FLM Technical Report

Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, Xinzhang Liu, Zihan Wang, Yu Zhao, Xin Wang, Yuyao Huang, Shuangyong Song, Yongxiang Li, Zheng Zhang, Bo Zhao, Aixin Sun, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang

arXiv (2024)

Abstract
Large language models (LLMs) have showcased profound capabilities in language understanding and generation, facilitating a wide array of applications. However, there is a notable paucity of detailed, open-sourced methodologies for efficiently scaling LLMs beyond 50 billion parameters with minimal trial-and-error cost and computational resources. In this report, we introduce Tele-FLM (aka FLM-2), a 52B open-source multilingual large language model that features a stable, efficient pre-training paradigm and enhanced factual judgment capabilities. Tele-FLM demonstrates superior multilingual language modeling abilities, measured by Bits-Per-Byte (BPB) on textual corpora. Moreover, in both English and Chinese foundation-model evaluations, it is comparable to strong open-source models that involve larger pre-training FLOPs, such as Llama2-70B and DeepSeek-67B. In addition to the model weights, we share the core designs, engineering practices, and training details, which we expect to benefit both the academic and industrial communities.
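The abstract reports multilingual modeling quality in Bits-Per-Byte (BPB), which normalizes a model's cross-entropy by the UTF-8 byte length of the evaluated text, making scores comparable across models with different tokenizers. The Python sketch below illustrates that conversion only; the total_nll_nats input and the commented model.sum_nll helper are illustrative assumptions, not details taken from the report.

    import math

    def bits_per_byte(total_nll_nats: float, text: str) -> float:
        """Convert a corpus-level negative log-likelihood (in nats, summed
        over all predicted tokens) into Bits-Per-Byte (BPB).

        Dividing by the UTF-8 byte count, rather than the token count,
        removes the tokenizer from the comparison, which is why BPB is
        used for cross-model, multilingual evaluation.
        """
        num_bytes = len(text.encode("utf-8"))
        # nats -> bits via log(2), then normalize per byte
        return total_nll_nats / (math.log(2) * num_bytes)

    # Hypothetical usage: `model.sum_nll(corpus_text)` stands in for any
    # routine returning the model's summed cross-entropy (in nats) on the text.
    # bpb = bits_per_byte(model.sum_nll(corpus_text), corpus_text)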