
Prediction, Learning, Uniform Convergence, and Scale-Sensitive Dimensions.

P. L. Bartlett, P. M. Long

Journal of Computer and System Sciences (1998)

Australian National University | Google

Cited 54 | Views 14
Abstract
We present a new general-purpose algorithm for learning classes of [0, 1]-valued functions in a generalization of the prediction model and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi, and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler and to Benedek and Itai, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each epsilon > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0, 1]-valued functions to be agnostically learnable to within epsilon and to be an epsilon-uniform Glivenko-Cantelli class. © Academic Press.
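The scale-sensitive dimensions in the abstract measure how richly a real-valued class can realize labelings with a margin: a point set is gamma-fat-shattered if witness levels exist so that every 0/1 labeling is achieved with values at least gamma above (label 1) or below (label 0) the witnesses. As an illustration only (not from the paper), the following brute-force sketch computes this for a small finite class; the function name, the witness grid, and the threshold-function example are assumptions made for the demo.

```python
from itertools import combinations, product

def fat_shattering_dim(value_table, gamma, witness_grid):
    """Exhaustively estimate the gamma-fat-shattering dimension of a
    finite function class on a finite domain (illustrative only;
    exponential in the number of points).

    value_table[j][i] = value of function j at domain point i.
    witness_grid restricts the search for witness levels r_i."""
    n_points = len(value_table[0])
    best = 0
    for d in range(1, n_points + 1):
        shattered = False
        for idx in combinations(range(n_points), d):
            for r in product(witness_grid, repeat=d):
                # Every 0/1 labeling b must be realized with margin gamma.
                if all(
                    any(
                        all((row[i] >= r[k] + gamma) if b[k]
                            else (row[i] <= r[k] - gamma)
                            for k, i in enumerate(idx))
                        for row in value_table
                    )
                    for b in product((0, 1), repeat=d)
                ):
                    shattered = True
                    break
            if shattered:
                break
        if shattered:
            best = d
        else:
            break
    return best

# Threshold functions 1[x >= t] on a small grid: for gamma < 1/2 the
# fat-shattering dimension coincides with the VC dimension, which is 1.
domain = (0.2, 0.4, 0.6, 0.8)
table = [[1.0 if x >= t else 0.0 for x in domain]
         for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
print(fat_shattering_dim(table, gamma=0.25, witness_grid=[0.5]))  # -> 1
```

For the threshold class the labeling (1, 0) on an ordered pair of points can never be realized, so the search stops at dimension 1, matching the classical VC dimension of thresholds on the line.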
Key words
scale-sensitive dimension, uniform convergence

Highlights: This paper presents a new general-purpose learning algorithm and, in terms of a scale-sensitive dimension, proves an upper bound on the algorithm's expected absolute error in a generalization of the prediction model, along with new general bounds on the sample complexity of agnostic learning.

Methods: The authors use a scale-sensitive dimension, a generalization of the Vapnik dimension, to prove the upper bound on the expected absolute error, and from this derive new upper bounds on packing numbers.

Experiments: The paper provides no experimental setup or datasets; instead it establishes the algorithm's expected performance and generalization bounds through theoretical analysis, showing that for each epsilon > 0 there are weaker sufficient and stronger necessary conditions for a class of [0, 1]-valued functions to be agnostically learnable to within epsilon and to be an epsilon-uniform Glivenko-Cantelli class.
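The packing numbers that the paper bounds count how many functions in a class are pairwise far apart in an empirical metric. As a hedged illustration (not the paper's construction), the greedy sketch below produces a maximal epsilon-separated subset under the normalized L1 metric on value vectors; its size is a lower bound on the packing number. The function name and the threshold-class example are assumptions made for the demo.

```python
def packing_number_lower_bound(rows, eps):
    """Greedy eps-packing under the normalized L1 (empirical) metric:
    keep each row whose distance to every kept row exceeds eps.
    The result is a valid packing, so its size lower-bounds the
    true packing number."""
    def dist(u, v):
        return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

    packed = []
    for p in rows:
        if all(dist(p, q) > eps for q in packed):
            packed.append(p)
    return len(packed)

# Value vectors of threshold functions 1[x >= t] on four sample points.
rows = [[1.0 if x >= t else 0.0 for x in (0.2, 0.4, 0.6, 0.8)]
        for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
print(packing_number_lower_bound(rows, eps=0.3))  # -> 3
```

Adjacent thresholds differ on only one of the four points (distance 0.25 <= 0.3), so the greedy pass keeps every second row, giving a packing of size 3 out of the five functions.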