Groupwise Ranking Loss for Multi-Label Learning

IEEE Access (2020)

Cited 4 | Views 141
Abstract
This work studies multi-label learning (MLL), where each instance is associated with a subset of positive labels. For each instance, a good multi-label predictor should keep the predicted positive labels close to the ground-truth positive ones. In this work, we propose a new loss for multi-label learning, named Groupwise Ranking LosS (GRLS). Minimizing GRLS encourages the predicted relevancy scores of the ground-truth positive labels to be higher than those of the negative labels. More importantly, its time complexity is linear in the number of candidate labels, whereas some pairwise-ranking-based methods incur quadratic complexity. We further analyze GRLS from the perspective of the label-wise margin and suggest that a multi-label predictor is label-wise effective if and only if GRLS is optimal. We also analyze the relations between GRLS and several widely used loss functions for MLL. Finally, we apply GRLS to multi-label learning, and extensive experiments on several benchmark multi-label databases demonstrate that the proposed method performs competitively with state-of-the-art methods.
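The groupwise idea described above, pushing the scores of all ground-truth positive labels above the scores of the negative labels using only a linear pass over the candidate labels, can be illustrated with a log-sum-exp surrogate. The sketch below is an illustrative assumption, not the exact GRLS formulation from the paper: the function name `groupwise_ranking_loss` and the specific softplus-of-log-sum-exp form are chosen here for exposition, to contrast with pairwise ranking losses that enumerate every positive-negative pair.

```python
# Illustrative groupwise ranking-style loss for multi-label learning.
# It penalizes any negative-label score approaching or exceeding any
# positive-label score, but aggregates each group with a log-sum-exp,
# so the cost is O(L) in the number of labels L rather than the
# O(|pos| * |neg|) cost of explicit pairwise comparisons.
# NOTE: this is a hypothetical sketch, not the paper's GRLS definition.

import numpy as np

def groupwise_ranking_loss(scores, labels):
    """
    scores : (L,) array of predicted relevancy scores for L candidate labels.
    labels : (L,) binary array, 1 for ground-truth positive labels, 0 otherwise.
    Returns a scalar that is small when every positive score is
    comfortably above every negative score.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.size == 0 or neg.size == 0:
        return 0.0
    # One pass over each group (linear time):
    lse_neg = np.log(np.sum(np.exp(neg)))        # soft maximum of negative scores
    lse_neg_pos = np.log(np.sum(np.exp(-pos)))   # soft maximum of -positive scores
    # Softplus of the soft margin violation:
    # log(1 + sum_j exp(s_j) * sum_i exp(-s_i)) over negatives j and positives i.
    return float(np.log1p(np.exp(lse_neg + lse_neg_pos)))

# Usage: three positive labels ranked well above two negatives -> small loss.
scores = np.array([3.1, 2.5, 2.8, -1.0, -0.5])
labels = np.array([1, 1, 1, 0, 0])
print(groupwise_ranking_loss(scores, labels))
```

In this sketch the loss vanishes only when every positive score dominates every negative score by a margin, which mirrors the label-wise-margin reading of GRLS optimality given in the abstract.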
Keywords
Multi-label learning, groupwise ranking, optimization