The Minimum Error Minimax Probability Machine

Journal of Machine Learning Research (2004)

Cited by 133
Abstract
We construct a distribution-free Bayes optimal classifier called the Minimum Error Minimax Probability Machine (MEMPM) in a worst-case setting, i.e., under all possible choices of class-conditional densities with a given mean and covariance matrix. By assuming no specific distributions for the data, our model is thus distinguished from traditional Bayes optimal approaches, where an assumption on the data distribution is a must. This model is extended from the Minimax Probability Machine (MPM), a recently proposed classifier, and is demonstrated to be the general case of MPM. Moreover, it includes another special case named the Biased Minimax Probability Machine, which is appropriate for handling biased classification. One appealing feature of MEMPM is that it contains an explicit performance indicator, i.e., a lower bound on the worst-case accuracy, which is shown to be tighter than that of MPM. We provide conditions under which the worst-case Bayes optimal classifier converges to the Bayes optimal classifier. We demonstrate how to apply a more general statistical framework to estimate model input parameters robustly. We also show how to extend our model to nonlinear classification by exploiting kernelization techniques. A series of experiments on both synthetic data sets and real-world benchmark data sets validates our proposition and demonstrates the effectiveness of our model.
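For readers who want the shape of the optimization behind the abstract, a minimal sketch follows. It is not taken from this listing; the notation (θ as the prior of the first class, α and β as the worst-case accuracies of the two classes, and a decision hyperplane a^T z = b) follows the usual MPM conventions and should be checked against the paper itself.

```latex
% Sketch of the MEMPM optimization the abstract describes (notation assumed):
% classes x and y with means \bar{x}, \bar{y} and covariances \Sigma_x, \Sigma_y.
\begin{align*}
\max_{\alpha,\,\beta,\,a \neq 0,\,b} \quad & \theta\,\alpha + (1-\theta)\,\beta \\
\text{s.t.} \quad & \inf_{x \sim (\bar{x},\,\Sigma_x)} \Pr\{a^{T} x \ge b\} \;\ge\; \alpha, \\
                  & \inf_{y \sim (\bar{y},\,\Sigma_y)} \Pr\{a^{T} y \le b\} \;\ge\; \beta .
\end{align*}
% Via the worst-case (generalized Chebyshev) bound used in MPM, each
% probabilistic constraint reduces to a second-order-cone constraint:
\begin{align*}
a^{T}\bar{x} - b \;\ge\; \kappa(\alpha)\,\sqrt{a^{T}\Sigma_x a}, \qquad
b - a^{T}\bar{y} \;\ge\; \kappa(\beta)\,\sqrt{a^{T}\Sigma_y a}, \qquad
\kappa(t) = \sqrt{\tfrac{t}{1-t}} .
\end{align*}
```

Constraining α = β recovers the MPM objective, consistent with the abstract's statement that MEMPM is the general case of MPM; fixing one of the two worst-case accuracies and maximizing the other corresponds to the biased variant mentioned above.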
Keywords
distribution-free,model input parameter,optimal approach,synthetic data set,kernel,data distribution,classifier converges,distribution-free bayes,bayes optimal classifier,real world benchmark data,sequential biased minimax probability machine,minimum error,recently-proposed novel classifier,worst-case accuracies,classification,minimum error minimax probability,optimal classifier,lower bound,covariance matrix,performance indicator,synthetic data