Metric Learning for Multi-output Tasks.

IEEE Trans. Pattern Anal. Mach. Intell. (2019)

Cited 113 | Viewed 59
Abstract
Multi-output learning, the task of simultaneously predicting multiple outputs for a single input, has attracted increasing interest from researchers due to its wide range of applications. The k nearest neighbor (kNN) algorithm is one of the most popular frameworks for handling multi-output problems. The performance of kNN depends crucially on the metric used to compute the distance between different instances. However, our experimental results show that existing advanced metric learning techniques cannot provide an appropriate distance metric for multi-output tasks. This paper systematically studies how to learn an appropriate distance metric for multi-output problems. In particular, we present a novel large margin metric learning paradigm for multi-output tasks, which projects both the input and the output into the same embedding space and then learns a distance metric that captures output dependency, so that instances with very different outputs are moved far apart. Several strategies are then proposed to speed up training and testing. Moreover, we study the generalization error bound of our method, which shows that our method is able to tighten the excess risk bounds. Experiments on three multi-output learning tasks (multi-label classification, multi-target regression, and multi-concept retrieval) validate the effectiveness and scalability of the proposed method.
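The abstract combines two standard ingredients: a learned (Mahalanobis-style) distance metric trained with a large-margin criterion, and kNN prediction over multiple outputs. The following minimal numpy sketch illustrates that general idea only, not the paper's actual algorithm: it learns a linear transform L with a triplet hinge loss (pull together instances with identical output vectors, push apart instances with different ones) and then predicts by averaging the outputs of the k nearest training instances under the learned metric. All data, variable names, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-output data (hypothetical): 2-D inputs, 2-D binary output vectors
# that depend on the signs of the input coordinates.
X = rng.normal(size=(60, 2))
Y = np.stack([X[:, 0] > 0, X[:, 1] > 0], axis=1).astype(float)


def mahalanobis_sq(L, a, b):
    """Squared distance ||L (a - b)||^2 under the learned transform L."""
    d = L @ (a - b)
    return float(d @ d)


# Mine triplets (i, j, k): j shares i's full output vector, k does not.
triplets = []
for i in range(len(X)):
    same = [j for j in range(len(X)) if j != i and np.array_equal(Y[j], Y[i])]
    diff = [k for k in range(len(X)) if not np.array_equal(Y[k], Y[i])]
    if same and diff:
        triplets.append((i, same[0], diff[0]))

# Large-margin training: for each triplet, enforce
#   d(i, j)^2 + margin <= d(i, k)^2
# via a hinge loss, updating L by gradient descent.
L = np.eye(2)
margin, lr = 1.0, 0.01
for epoch in range(50):
    for i, j, k in triplets:
        dij, dik = X[i] - X[j], X[i] - X[k]
        loss = mahalanobis_sq(L, X[i], X[j]) + margin - mahalanobis_sq(L, X[i], X[k])
        if loss > 0:
            # Gradient of the active hinge term w.r.t. L.
            grad = 2 * L @ (np.outer(dij, dij) - np.outer(dik, dik))
            L -= lr * grad


def predict(x, k=5):
    """kNN multi-output prediction: average the outputs of the k nearest
    training instances under the learned metric."""
    d = [mahalanobis_sq(L, x, xi) for xi in X]
    idx = np.argsort(d)[:k]
    return Y[idx].mean(axis=0)
```

Note that the paper's method additionally embeds the *outputs* into the same space as the inputs to model output dependency; this sketch only defines neighborhoods through exact output-vector equality, which is the simplest stand-in for that idea.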
Keywords
Measurement, Training, Decoding, Testing, Semantics, Principal component analysis