Retrieving Examples from Memory for Retrieval Augmented Neural Machine Translation: A Systematic Comparison
arXiv (2024)
Abstract
Retrieval-Augmented Neural Machine Translation (RAMT) architectures retrieve
examples from memory to guide the generation process. While most work in this
line explores new ways to exploit the retrieved examples, the upstream
retrieval step remains largely unexplored. In this paper, we study the effect
of varying the retrieval method for several translation architectures, to
better understand the interplay between these two processes. We conduct
experiments for two language pairs in a multi-domain setting and consider
several downstream architectures based on a standard autoregressive model, an
edit-based model, and a large language model with in-context learning. Our
experiments show that the choice of retrieval technique impacts translation
scores, with variance across architectures. We also discuss the effects of
increasing the number and diversity of examples, which are mostly positive
across the board.
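To make the retrieval-then-generate pipeline concrete, here is a minimal sketch of one common RAMT setup the abstract mentions (in-context learning with an LLM): retrieve the most similar source sentences from a translation memory, then format the retrieved pairs as few-shot demonstrations. The similarity metric (`SequenceMatcher` surface similarity), the toy memory, and the prompt format are all illustrative assumptions, not the paper's actual method or data.

```python
from difflib import SequenceMatcher

# Toy translation memory of (source, target) pairs -- illustrative only.
MEMORY = [
    ("the cat sleeps on the mat", "le chat dort sur le tapis"),
    ("the dog runs in the park", "le chien court dans le parc"),
    ("a cat plays with the ball", "un chat joue avec la balle"),
]

def retrieve(query, memory, k=2):
    """Return the k entries whose source side is most similar to the query.

    Surface similarity via SequenceMatcher is just one of many possible
    retrieval metrics (others: BM25, edit distance, embedding similarity).
    """
    return sorted(
        memory,
        key=lambda pair: SequenceMatcher(None, query, pair[0]).ratio(),
        reverse=True,
    )[:k]

def build_prompt(query, examples):
    """Format retrieved pairs as in-context demonstrations for an LLM."""
    blocks = [f"English: {src}\nFrench: {tgt}" for src, tgt in examples]
    blocks.append(f"English: {query}\nFrench:")
    return "\n\n".join(blocks)

examples = retrieve("the cat sleeps on the sofa", MEMORY)
print(build_prompt("the cat sleeps on the sofa", examples))
```

Swapping the scoring function inside `retrieve` is all it takes to compare retrieval techniques while holding the downstream architecture fixed, which is the kind of controlled comparison the paper describes.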