Large Language Model based Situational Dialogues for Second Language Learning
CoRR (2024)
Abstract
In second language learning, scenario-based conversation practice is
important for language learners to achieve fluency in speaking, but students
often lack sufficient opportunities to practice their conversational skills
with qualified instructors or native speakers. To bridge this gap, we propose
situational dialogue models for students to engage in conversational practice.
Our situational dialogue models are fine-tuned from large language models (LLMs),
with the aim of combining the engaging nature of an open-ended conversation
with the focused practice of scenario-based tasks. Leveraging the
generalization capabilities of LLMs, we demonstrate that our situational
dialogue models perform effectively not only on training topics but also on
topics not encountered during training. This offers a promising solution to
support a wide range of conversational topics without extensive manual work.
Additionally, research in the field of dialogue systems still lacks reliable
automatic evaluation metrics, leading to human evaluation as the gold standard
(Smith et al., 2022), which is typically expensive. To address the limitations
of existing evaluation methods, we present a novel automatic evaluation method
that employs fine-tuned LLMs to efficiently and effectively assess the
performance of situational dialogue models.
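The abstract does not specify how the fine-tuned evaluator is prompted or how its judgments are extracted, so the following is only a minimal sketch of the general LLM-as-judge pattern it describes: format a dialogue into an evaluation prompt, query a judge model, and parse a numeric score. The rubric wording, the 1–5 scale, and the `judge` callable are all illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of automatic dialogue evaluation with a fine-tuned
# judge LLM. The prompt wording and 1-5 rubric are assumptions for
# illustration; the real judge model is stubbed out below.
import re


def build_eval_prompt(scenario: str, dialogue: str) -> str:
    """Format one dialogue into an evaluation prompt for the judge model."""
    return (
        f"Scenario: {scenario}\n"
        f"Dialogue:\n{dialogue}\n"
        "Rate the responses for coherence and scenario relevance "
        "on a scale of 1-5. Answer with a single integer."
    )


def parse_score(completion: str) -> int:
    """Extract the first digit 1-5 from the judge model's completion."""
    match = re.search(r"[1-5]", completion)
    if match is None:
        raise ValueError(f"no score found in: {completion!r}")
    return int(match.group())


def evaluate(scenario: str, dialogue: str, judge) -> int:
    """Score one dialogue with a judge callable (prompt -> completion)."""
    return parse_score(judge(build_eval_prompt(scenario, dialogue)))


# Stub standing in for the fine-tuned judge LLM.
stub_judge = lambda prompt: "Score: 4"
print(evaluate("ordering food at a restaurant",
               "A: Are you ready to order?\nB: Yes, the soup, please.",
               stub_judge))
```

Averaging such scores over many scenario dialogues would yield the kind of cheap, repeatable proxy for human judgment that the abstract motivates.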