With Greater Text Comes Greater Necessity: Inference-Time Training Helps Long Text Generation
CoRR (2024)
Abstract
Long text generation, such as novel writing or discourse-level translation
with extremely long contexts, presents significant challenges to current
language models. Existing methods mainly focus on extending the model's context
window through strategies like length extrapolation. However, these approaches
demand substantial hardware resources during the training and/or inference
phases. Our proposed method, Temp-Lora, introduces an alternative concept.
Instead of relying on the KV cache to store all context information, Temp-Lora
embeds this information directly into the model's parameters. In the process of
long text generation, we use a temporary Lora module, progressively trained
with text generated previously. This approach not only efficiently preserves
contextual knowledge but also prevents any permanent alteration to the model's
parameters given that the module is discarded post-generation. Extensive
experiments on the PG19 language modeling benchmark and the GuoFeng
discourse-level translation benchmark validate the effectiveness of Temp-Lora.
Our results show that: 1) Temp-Lora substantially enhances generation quality
for long texts, as indicated by a 13.2% decrease in perplexity (PPL) on a
subset of PG19, and a 29.6% decrease in PPL along with a 53.2% increase in BLEU
score on a subset of GuoFeng; 2) Temp-Lora is compatible with and enhances most
existing long text generation methods; and 3) Temp-Lora can greatly reduce
computational costs by shortening the context window. For example, while still
ensuring a slight improvement in generation quality (a 3.8% decrease in PPL),
it enables a 70.5% reduction in the FLOPs required for inference and a 51.5%
reduction in latency.
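To make the generate-then-train loop concrete, here is a minimal sketch in Python using HuggingFace transformers and peft. It is an illustration of the idea under stated assumptions, not the paper's implementation: the model name, chunk size, context truncation, LoRA rank, and learning rate are all placeholder choices, and generate_long_text is a hypothetical helper.

```python
# Minimal Temp-Lora-style loop: generate a chunk, then fold that chunk into a
# temporary LoRA adapter via a next-token-prediction step, and repeat.
# All hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM works here
CHUNK_TOKENS = 1024                       # assumption: per-step generation budget

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

# Attach a temporary LoRA module; only these weights are updated at inference
# time, so the base parameters are never permanently altered.
lora_cfg = LoraConfig(task_type="CAUSAL_LM", r=64, lora_alpha=128,
                      target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_cfg).to(device)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-5
)

def generate_long_text(prompt: str, num_chunks: int = 8) -> str:
    text = prompt
    for _ in range(num_chunks):
        # 1) Generate the next chunk with the current (base + Temp-Lora) model.
        #    The character-level slice is a crude stand-in for a short context
        #    window; the point of Temp-Lora is that this window can stay small
        #    because earlier context lives in the adapter weights.
        model.eval()
        inputs = tokenizer(text[-4096:], return_tensors="pt").to(device)
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=CHUNK_TOKENS,
                                 do_sample=True)
        new_text = tokenizer.decode(
            out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        text += new_text

        # 2) Embed the just-generated chunk into the LoRA parameters with a
        #    plain language-modeling update (a single pass, for illustration).
        model.train()
        batch = tokenizer(new_text, return_tensors="pt").to(device)
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    return text

story = generate_long_text("Chapter 1. The storm broke at midnight.")

# 3) Discard the temporary module after generation; unload() returns the
#    unmodified base model, so no permanent change is made.
model = model.unload()
```

In this sketch the adapter plays the role the abstract describes for the KV cache: contextual knowledge accumulates in the LoRA weights as generation proceeds, which is why the attention window passed to generate() can remain short.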