Gain-Some-Lose-Some: Reliable Quantification Under General Dataset Shift

2021 21st IEEE International Conference on Data Mining (ICDM 2021)

Abstract
When applying supervised learning to estimate class distributions of unlabelled samples (so-called quantification), dataset shift is an expected yet challenging problem. Existing quantification methods make strong assumptions on the nature of dataset shift that often will not hold in practice. We propose a novel Gain-Some-Lose-Some (GSLS) model that accounts for more general conditions of dataset shift. We present a method for fitting the GSLS model without any labelled instances from the target sample, and experimentally demonstrate that GSLS can produce reliable quantification prediction intervals under broader conditions of shift than existing quantification methods.
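To make the task concrete: quantification means estimating the class prevalences of an unlabelled target sample, not labelling individual instances. A minimal sketch of the standard "classify and count" baseline is shown below; this is purely illustrative of the problem setting and is not the GSLS method proposed in the paper (and, as the abstract notes, such simple baselines are unreliable under dataset shift).

```python
from collections import Counter

def classify_and_count(predicted_labels):
    """Estimate class prevalences from a classifier's hard predictions
    on an unlabelled target sample (the 'classify and count' baseline).
    """
    n = len(predicted_labels)
    counts = Counter(predicted_labels)
    return {label: count / n for label, count in counts.items()}

# Hypothetical predictions on a small unlabelled target sample.
preds = ["pos", "neg", "pos", "pos", "neg"]
print(classify_and_count(preds))  # {'pos': 0.6, 'neg': 0.4}
```

Under dataset shift, the classifier's error rates differ between training and target distributions, so this naive estimate becomes biased; correcting for that bias under weaker shift assumptions is the problem the GSLS model addresses.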
Keywords
quantification, dataset shift, prediction intervals