Gated Neural Network With Regularized Loss For Multi-Label Text Classification

2019 International Joint Conference on Neural Networks (IJCNN)

Cited by 6
Abstract
Multi-label text classification is generally more difficult than single-label classification because of its exponentially large output label space and the variable relationship between document length and label count: we observed that some long documents carry only one or two labels, while some short documents are associated with many more. In this paper, we propose a dynamic Gated Neural Network architecture that processes a document through two parallel components: one extracts the most informative semantics and filters out redundant information, while the other captures context semantics and preserves most of the information in the document. The semantics from these two components are dynamically combined by a gate before the final classification. To improve training, we further incorporate label dependencies into the conventional binary cross-entropy loss by exploiting label co-occurrences. Experimental results on the AAPD and RCV1-V2 datasets show that our proposed methods achieve state-of-the-art performance. Further analysis demonstrates that the proposed methods not only capture sufficient feature information from both long and short documents with varying numbers of labels, but also exploit label dependencies to regularize the model and further improve its performance.
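The abstract does not give the exact equations, but the two core ideas (a gate that blends two document representations, and a binary cross-entropy loss regularized by label co-occurrence statistics) can be sketched in NumPy. All function names, the gate parameterization, and the particular form of the co-occurrence regularizer below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_filter, h_context, W_g, b_g):
    """Blend two document representations with a learned gate.

    h_filter  : vector from the filtering component (informative semantics)
    h_context : vector from the context component (most information kept)
    W_g, b_g  : gate parameters (hypothetical parameterization)
    """
    g = sigmoid(np.concatenate([h_filter, h_context]) @ W_g + b_g)
    # element-wise convex combination controlled by the gate
    return g * h_filter + (1.0 - g) * h_context

def regularized_bce(y_true, y_prob, cooc, lam=0.1, eps=1e-9):
    """Binary cross-entropy plus a label-dependency regularizer.

    cooc[i, j] is a (normalized) co-occurrence weight for labels i and j;
    the assumed regularizer pulls predicted probabilities of frequently
    co-occurring labels toward each other.
    """
    bce = -np.mean(y_true * np.log(y_prob + eps)
                   + (1.0 - y_true) * np.log(1.0 - y_prob + eps))
    diff = y_prob[:, None] - y_prob[None, :]
    reg = np.sum(cooc * diff ** 2)
    return bce + lam * reg

# Usage sketch with random representations and three labels
rng = np.random.default_rng(0)
d, L = 4, 3
h1, h2 = rng.normal(size=d), rng.normal(size=d)
W_g, b_g = rng.normal(size=(2 * d, d)), np.zeros(d)
h = gated_fusion(h1, h2, W_g, b_g)

y_true = np.array([1.0, 0.0, 1.0])
y_prob = np.array([0.9, 0.2, 0.7])
cooc = np.ones((L, L))          # toy co-occurrence weights
loss = regularized_bce(y_true, y_prob, cooc)
```

With `lam = 0`, the loss reduces to plain binary cross-entropy; the regularizer only adds a penalty when co-occurring labels receive dissimilar probabilities.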
Keywords
Multi-label, Dynamic Gate, Neural Networks, Label Co-occurrences