Decomposed Meta-Learning for Few-Shot Sequence Labeling

IEEE/ACM Transactions on Audio, Speech, and Language Processing (2024)

Abstract
Few-shot sequence labeling is a general problem formulation for many natural language understanding tasks in data-scarcity scenarios, which require models to generalize to new types from only a few labeled examples. Recent advances mostly adopt metric-based meta-learning and thus face the challenges of modeling the miscellaneous Other prototype and the inability to generalize to classes with large domain gaps. To overcome these challenges, we propose a decomposed meta-learning framework for few-shot sequence labeling that breaks down the task into few-shot mention detection and few-shot type classification, and tackles them sequentially via meta-learning. Specifically, we employ model-agnostic meta-learning (MAML) to prompt the mention detection model to learn boundary knowledge shared across types. With the detected mention spans, we further leverage a MAML-enhanced span-level prototypical network for few-shot type classification. In this way, the decomposition framework bypasses the need to model the miscellaneous Other prototype. Meanwhile, the MAML algorithm lets us exploit the knowledge contained in support examples more efficiently, so that our model can quickly adapt to new types using only a few labeled examples. Under our framework, we explore a basic implementation that uses two separate models for the two subtasks. We further propose a joint model to reduce model size and inference time, making our framework more applicable to scenarios with limited resources. Extensive experiments on nine benchmark datasets, covering named entity recognition, slot tagging, event detection, and part-of-speech tagging, show that the proposed approach achieves state-of-the-art performance across various few-shot sequence labeling tasks.
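The type-classification stage described above uses a span-level prototypical network: each type's prototype is the mean of its support span embeddings, and a query span is assigned to the nearest prototype. A minimal sketch of this nearest-prototype step, using toy 2-D embeddings (the embedding values and type names below are illustrative; the paper's actual model uses MAML-adapted contextual span representations):

```python
import numpy as np

def prototypes(support_embs, support_labels):
    # One prototype per type: the mean of that type's support span embeddings.
    types = sorted(set(support_labels))
    return {t: np.mean([e for e, l in zip(support_embs, support_labels) if l == t], axis=0)
            for t in types}

def classify(query_emb, protos):
    # Assign the query span to the nearest prototype by Euclidean distance.
    return min(protos, key=lambda t: np.linalg.norm(query_emb - protos[t]))

# Toy 2-way, 2-shot episode with hypothetical 2-D span embeddings.
support = [np.array([0.0, 0.0]), np.array([0.2, 0.0]),   # "PER" spans
           np.array([1.0, 1.0]), np.array([1.2, 1.0])]   # "LOC" spans
labels = ["PER", "PER", "LOC", "LOC"]
protos = prototypes(support, labels)
print(classify(np.array([0.1, 0.1]), protos))  # → PER
```

Because only the support-set means are needed, no per-type parameters are trained at test time, which is what makes the classifier adapt to unseen types from a handful of examples.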
Keywords
few-shot sequence labeling,task decomposition,meta-learning