# Credal Learning Theory

arXiv (2024)

Abstract

Statistical learning theory is the foundation of machine learning, providing
theoretical bounds for the risk of models learnt from a (single) training set,
assumed to issue from an unknown probability distribution. In actual
deployment, however, the data distribution may (and often does) vary, causing
domain adaptation/generalization issues. In this paper we lay the foundations
for a `credal' theory of learning, using convex sets of probabilities (credal
sets) to model the variability in the data-generating distribution. Such credal
sets, we argue, may be inferred from a finite sample of training sets. Bounds
are derived for the case of finite hypothesis spaces (both with and without
the realizability assumption) as well as infinite model spaces, and directly
generalize classical results.
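The abstract's central idea can be illustrated with a minimal sketch (this is an assumption-laden toy, not the paper's formal construction): given several training sets drawn from a varying data source over a finite outcome space, take the convex hull of their empirical distributions as an approximate credal set, and read off lower/upper probabilities as minima/maxima over its vertices. All distributions and sample sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_outcomes = 3

# Hypothetical data: five training sets, each drawn from a slightly
# different categorical distribution (the "varying" data-generating source).
training_sets = [
    rng.choice(n_outcomes, size=200, p=p)
    for p in ([0.5, 0.3, 0.2], [0.6, 0.2, 0.2], [0.4, 0.4, 0.2],
              [0.5, 0.2, 0.3], [0.55, 0.25, 0.2])
]

# Empirical distribution of each training set: these serve as the
# extreme points spanning the (approximate) credal set.
vertices = np.array([np.bincount(s, minlength=n_outcomes) / len(s)
                     for s in training_sets])

# Lower/upper probability of each outcome: since probability of an
# outcome is linear in the distribution, its extrema over a convex
# hull are attained at the vertices.
lower = vertices.min(axis=0)
upper = vertices.max(axis=0)

for k in range(n_outcomes):
    print(f"P({k}) lies in [{lower[k]:.3f}, {upper[k]:.3f}]")
```

With more training sets, the hull better captures the variability of the underlying distribution; the paper's bounds concern exactly this finite-sample inference of the credal set.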
