EPiDA: an Easy Plug-in Data Augmentation Framework for High Performance Text Classification

NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)

Abstract
Recent works have empirically shown the effectiveness of data augmentation (DA) for NLP tasks, especially for those suffering from data scarcity. Intuitively, given the size of generated data, their diversity and quality are crucial to the performance of targeted tasks. However, to the best of our knowledge, most existing methods consider only either the diversity or the quality of augmented data, thus cannot fully tap the potential of DA for NLP. In this paper, we present an easy and plug-in data augmentation framework EPiDA to support effective text classification. EPiDA employs two mechanisms: relative entropy maximization (REM) and conditional entropy minimization (CEM) to control data generation, where REM is designed to enhance the diversity of augmented data while CEM is exploited to ensure their semantic consistency. EPiDA can support efficient and continuous data generation for effective classifier training. Extensive experiments show that EPiDA outperforms existing SOTA methods in most cases, though not using any agent network or pre-trained generation network, and it works well with various DA algorithms and classification models. Code is available at https://github.com/zhaominyiz/EPiDA.
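The abstract describes two scoring mechanisms: REM rewards augmented samples whose classifier output diverges from the original (diversity), while CEM penalizes samples whose predicted label distribution is uncertain (semantic consistency). The following is only an illustrative sketch of how such a combined ranking of candidate augmentations might look; the function names, the weighting parameter `alpha`, and the exact form of the combined score are assumptions for exposition, not the paper's actual objective (see the linked repository for the real implementation).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def conditional_entropy(q, eps=1e-12):
    """Entropy of the classifier's predicted label distribution."""
    q = np.clip(q, eps, 1.0)
    return float(-np.sum(q * np.log(q)))

def combined_score(p_orig, p_aug, alpha=0.5):
    """Toy REM/CEM-style score (hypothetical weighting, not from the paper).

    REM term: a larger divergence from the original prediction
    suggests a more diverse augmentation.
    CEM term: a lower entropy suggests the augmented sample still
    carries a confident, consistent label.
    """
    rem = kl_divergence(p_aug, p_orig)
    cem = conditional_entropy(p_aug)
    return alpha * rem - (1.0 - alpha) * cem

# Rank two candidate augmentations of one example by the combined score.
p_orig = np.array([0.8, 0.1, 0.1])            # classifier output on the original text
candidates = [np.array([0.7, 0.2, 0.1]),      # mild paraphrase
              np.array([0.4, 0.3, 0.3])]      # more aggressive edit
scores = [combined_score(p_orig, q) for q in candidates]
best = int(np.argmax(scores))
```

In this sketch, a candidate is preferred when it moves the prediction away from the original (diversity) without making the label distribution diffuse (consistency); tuning `alpha` trades one off against the other.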