Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model
CoRR (2024)
Abstract
In this study, we introduce CT-LLM, a 2B large language model (LLM) that
illustrates a pivotal shift towards prioritizing the Chinese language in
developing LLMs. Uniquely initiated from scratch, CT-LLM diverges from the
conventional methodology by primarily incorporating Chinese textual data,
utilizing an extensive corpus of 1,200 billion tokens, including 800 billion
Chinese tokens, 300 billion English tokens, and 100 billion code tokens. This
strategic composition facilitates the model's exceptional proficiency in
understanding and processing Chinese, a capability further enhanced through
alignment techniques. Demonstrating remarkable performance on the CHC-Bench,
CT-LLM excels in Chinese language tasks and showcases its adeptness in English
through SFT. This research challenges the prevailing paradigm of training LLMs
predominantly on English corpora and then adapting them to other languages,
broadening the horizons for LLM training methodologies. By open-sourcing the
full process of training a Chinese LLM, including a detailed data processing
procedure with the obtained Massive Appropriate Pretraining Chinese Corpus
(MAP-CC), a well-chosen multidisciplinary Chinese Hard Case Benchmark
(CHC-Bench), and the 2B-size Chinese Tiny LLM (CT-LLM), we aim to foster
further exploration and innovation in both academia and industry, paving the
way for more inclusive and versatile language models.