Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation
arXiv (2024)
Abstract
Parameter-efficient fine-tuning (PEFT) methods are increasingly vital for
adapting large-scale pre-trained language models to diverse tasks, offering a
balance between adaptability and computational efficiency. They are
particularly important in Low-Resource Language (LRL) Neural Machine
Translation (NMT), where translation accuracy must be improved with minimal
resources. However, their practical effectiveness varies significantly across
languages. We conducted comprehensive empirical experiments across varying LRL
domains and dataset sizes to evaluate the performance of 8 PEFT methods,
comprising 15 architectures in total, using the SacreBLEU score. We show that
6 PEFT architectures outperform the baseline on both in-domain and
out-of-domain tests, and that the Houlsby+Inversion adapter performs best
overall, demonstrating the effectiveness of PEFT methods.
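
As a rough illustration of the setup the abstract describes, the sketch below attaches a Houlsby-style adapter with invertible ("inversion") layers to a pre-trained seq2seq model and scores outputs with SacreBLEU. This is not the authors' code: the checkpoint name, adapter name, and example sentences are illustrative assumptions, and the adapter API assumed here is the AdapterHub `adapters` library.

```python
# Minimal sketch (not the paper's actual configuration), assuming the
# AdapterHub "adapters" library and the "sacrebleu" package are installed.
import sacrebleu
from transformers import AutoModelForSeq2SeqLM
from adapters import init, HoulsbyInvConfig

# Placeholder checkpoint; the paper's base model may differ.
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50")
init(model)  # enable adapter support on a vanilla Transformers model

# Houlsby adapters insert bottleneck modules after both the attention and
# feed-forward sublayers; the "Inv" variant adds invertible adapters at the
# embedding layer, which is what "Houlsby+Inversion" refers to here.
model.add_adapter("lrl_nmt", config=HoulsbyInvConfig())
model.train_adapter("lrl_nmt")  # freeze the backbone, train only the adapter

# ... fine-tune on the low-resource parallel corpus, then translate ...

# SacreBLEU over (hypothetical) system outputs against references;
# `references` holds one full reference stream per inner list.
hypotheses = ["the cat sat on the mat"]
references = [["the cat sat on the mat"]]
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```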