Low-shot Learning in Natural Language Processing

2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)(2020)

Abstract
This paper studies the low-shot learning paradigm in Natural Language Processing (NLP), which aims to equip models with the ability to adapt to new tasks or new domains with limited annotated data, such as zero or only a few labeled examples. Specifically, low-shot learning unifies the zero-shot and few-shot learning paradigms. Diverse low-shot learning approaches, including capsule-based networks, data-augmentation methods, and memory networks, are discussed for different NLP tasks, for example, intent detection and named entity typing. We also outline potential future directions for low-shot learning in NLP.
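To illustrate the zero-shot side of this paradigm, the sketch below classifies an utterance's intent by comparing its embedding to the embeddings of the intent *label names*, so no labeled training examples are needed. The word vectors here are hand-made toy values (an assumption for the sake of a self-contained example); in practice one would use pretrained embeddings, and this is not the specific method of the paper.

```python
import math

# Toy word embeddings -- hypothetical values for illustration only.
# A real system would use pretrained vectors (e.g. word2vec or a BERT encoder).
EMB = {
    "play":     [0.90, 0.10, 0.00],
    "music":    [0.80, 0.20, 0.10],
    "song":     [0.85, 0.15, 0.05],
    "weather":  [0.10, 0.90, 0.10],
    "rain":     [0.05, 0.85, 0.20],
    "forecast": [0.10, 0.80, 0.15],
}

def embed(text):
    """Average the embeddings of the known words in the text."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def zero_shot_intent(utterance, intent_labels):
    """Pick the intent whose label-name embedding is closest to the utterance."""
    u = embed(utterance)
    return max(intent_labels, key=lambda label: cosine(u, embed(label)))

print(zero_shot_intent("play a song", ["play music", "weather forecast"]))
# -> play music
```

Because the classifier only needs an embedding of each label name, new intents can be added at test time without any retraining, which is the defining property of zero-shot learning.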
Keywords
Zero-shot Learning, Few-shot Learning, Natural Language Processing, Intent Detection