Evolving Knowledge Distillation with Large Language Models and Active Learning
CoRR (2024)
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across
various NLP tasks. However, their computational costs are prohibitively high.
To address this issue, previous research has attempted to distill the knowledge
of LLMs into smaller models by generating annotated data. Nonetheless, these
works have mainly focused on the direct use of LLMs for text generation and
labeling, without fully exploring their potential to comprehend the target task
and acquire valuable knowledge. In this paper, we propose EvoKD: Evolving
Knowledge Distillation, which leverages the concept of active learning to
interactively enhance the process of data generation using large language
models, simultaneously improving the task capabilities of a small domain model (the student model). Unlike previous work, we actively analyze the student model's weaknesses and then synthesize labeled samples based on that analysis.
In addition, we provide iterative feedback to the LLMs regarding the student
model's performance to continuously construct diversified and challenging
samples. Experiments and analysis on different NLP tasks, namely text classification and named entity recognition, show the effectiveness of EvoKD.
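To make the described loop concrete, here is a minimal sketch of one EvoKD-style iteration under assumed interfaces: the objects `llm` and `student`, and the methods `complete`, `predict`, and `train`, are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of one EvoKD-style active-learning round:
# diagnose the student's weaknesses, ask the LLM to synthesize
# targeted labeled samples, and fine-tune the student on them.

def evokd_round(llm, student, eval_set, n_samples=8):
    """One round: analyze student errors, generate data, update the student."""
    # 1. Evaluate the student and collect the examples it gets wrong.
    mistakes = [(x, y, student.predict(x))
                for x, y in eval_set
                if student.predict(x) != y]

    # 2. Ask the LLM to analyze the weaknesses revealed by these errors.
    analysis = llm.complete(
        "The student model made these errors:\n"
        + "\n".join(f"input={x!r} gold={y} pred={p}" for x, y, p in mistakes)
        + "\nDescribe the student's weaknesses on the target task."
    )

    # 3. Ask the LLM to synthesize diverse, challenging labeled samples
    #    that target the identified weaknesses.
    raw = llm.complete(
        f"Based on this analysis:\n{analysis}\n"
        f"Generate {n_samples} new labeled examples for the target task, "
        "one per line in the form: text<TAB>label."
    )
    samples = [tuple(line.split("\t", 1))
               for line in raw.splitlines() if "\t" in line]

    # 4. Fine-tune the student on the synthesized samples; the next round's
    #    errors then act as feedback on what the student has just learned.
    student.train(samples)
    return samples
```

Repeating this round gives the iterative feedback loop the abstract describes: each iteration reassesses the updated student, so the generated samples stay diversified and challenging relative to its current ability.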