AdapterDistillation: Non-Destructive Task Composition with Knowledge Distillation
EMNLP 2023 (2023)
Abstract
Leveraging knowledge from multiple tasks by introducing a small number
of task-specific parameters, known as adapters, into each transformer layer
has received much attention recently. However, adding an extra fusion
layer to implement knowledge composition not only increases the inference time
but also scales poorly for some applications. To avoid these issues, we
propose a two-stage knowledge distillation algorithm called
AdapterDistillation. In the first stage, we extract task-specific knowledge by
using local data to train a student adapter. In the second stage, we distill
the knowledge from the existing teacher adapters into the student adapter to
aid its inference. Extensive experiments on frequently asked question
retrieval in task-oriented dialog systems validate the efficiency of
AdapterDistillation. We show that AdapterDistillation outperforms existing
algorithms in terms of accuracy, resource consumption, and inference time.
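For concreteness, below is a minimal PyTorch sketch of what the two stages described in the abstract might look like. The bottleneck adapter architecture, the choice of cross-entropy for the stage-1 task loss, the temperature-scaled KL term for stage-2 distillation, and the equal averaging of teacher predictions are all assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckAdapter(nn.Module):
    """A standard bottleneck adapter: down-project, non-linearity, up-project,
    with a residual connection (an assumed architecture, for illustration)."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(F.relu(self.down(hidden_states)))


def stage1_task_loss(student_logits: torch.Tensor,
                     labels: torch.Tensor) -> torch.Tensor:
    """Stage 1: train the student adapter on local task data with a plain
    supervised objective (cross-entropy assumed here)."""
    return F.cross_entropy(student_logits, labels)


def stage2_distill_loss(student_logits: torch.Tensor,
                        teacher_logits_list: list,
                        labels: torch.Tensor,
                        temperature: float = 2.0,
                        alpha: float = 0.5) -> torch.Tensor:
    """Stage 2: distill knowledge from the frozen teacher adapters into the
    student adapter. The teachers' softened predictions are averaged equally
    and matched with a KL term, mixed with the supervised loss; the mixing
    weight and temperature are illustrative hyperparameters."""
    ce = F.cross_entropy(student_logits, labels)
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        teacher_probs,
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```

Because only the lightweight student adapter is updated and the teacher adapters are used solely as sources of soft targets during training, no extra fusion layer is required at inference time, which is the property the abstract highlights.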