Theoretical Comparisons of Learning from Positive-Negative, Positive-Unlabeled, and Negative-Unlabeled Data.

arXiv: Learning (2016)

Citations: 23 | Views: 59
Abstract
In PU learning, a binary classifier is trained only from positive (P) and unlabeled (U) data, without negative (N) data. Although N data are missing, PU learning sometimes outperforms PN learning (i.e., fully supervised learning) in experiments. In this paper, we theoretically compare PU learning (and its counterpart, NU learning) against PN learning, and prove that, given infinite U data, one of PU and NU learning will almost always improve on PN learning. Our theoretical finding is also validated experimentally.
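To make the setting concrete, the following is a minimal sketch of an unbiased PU risk estimate, in the spirit of the risk-rewriting approach common in the PU-learning literature. It assumes the class prior `pi` is known and uses a hinge-style margin loss; the function names and data are illustrative, not the paper's own code.

```python
import numpy as np

def hinge(z):
    """Margin loss ell(z) = max(0, 1 - z), applied to y * g(x)."""
    return np.maximum(0.0, 1.0 - z)

def pn_risk(scores_p, scores_n, pi, loss=hinge):
    """Standard PN (supervised) risk:
    pi * E_P[ell(g(x))] + (1 - pi) * E_N[ell(-g(x))]."""
    return pi * np.mean(loss(scores_p)) + (1.0 - pi) * np.mean(loss(-scores_n))

def pu_risk(scores_p, scores_u, pi, loss=hinge):
    """PU risk rewritten without N data: the negative-class term
    E_N[ell(-g(x))] is recovered from U and P using
    (1 - pi) * E_N[...] = E_U[...] - pi * E_P[...]."""
    risk_pos = pi * np.mean(loss(scores_p))
    risk_neg = np.mean(loss(-scores_u)) - pi * np.mean(loss(-scores_p))
    return risk_pos + risk_neg

# Toy check: if U is an exact pi / (1 - pi) mixture of the P and N
# samples, the PU estimate reproduces the PN risk.
scores_p = np.array([0.5, 2.0])                      # classifier outputs on P
scores_n = np.array([-0.5, -3.0])                    # classifier outputs on N
scores_u = np.concatenate([scores_p, scores_n])      # U = mixture (pi = 0.5)
print(pn_risk(scores_p, scores_n, 0.5))              # 0.25
print(pu_risk(scores_p, scores_u, 0.5))              # 0.25
```

The agreement of the two estimates on this exact mixture illustrates why PU learning can compete with PN learning: with enough U data, the missing negative term is estimated rather than observed.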