
(Machine) learning parameter regions

Journal of Econometrics(2021)

Abstract
How many random points from an identified set, a confidence set, or a highest posterior density set suffice to describe them? This paper argues that taking random draws from a parameter region in order to approximate its shape is a supervised learning problem (analogous to sampling pixels of an image to recognize it). Misclassification error - a common criterion in machine learning - provides an off-the-shelf tool to assess the quality of a given approximation. We say a parameter region can be learned if there is an algorithm that yields a misclassification error of at most epsilon with probability at least 1 - delta, regardless of the sampling distribution. We show that learning a parameter region is possible if and only if its potential shapes are not too complex. Moreover, the tightest band that contains a d-dimensional parameter region is always learnable from the inside (in a sense we make precise), with at least max {(1 - epsilon) ln (1/delta), (3/16)d}/epsilon draws, but at most min{2d ln(2d/delta), exp(1)(2d+ln(1/delta))}/epsilon. These bounds grow linearly in the dimension of the parameter region, and are uniform with respect to its true shape. We illustrate the usefulness of our results using structural vector autoregressions. We show how many orthogonal matrices are necessary/sufficient to evaluate the impulse responses' identified set and how many 'shotgun plots' to report when conducting joint inference on impulse responses. (C) 2020 Elsevier B.V. All rights reserved.
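The lower and upper bounds on the number of draws stated above can be evaluated numerically. The sketch below (function name and parameter choices are ours, not from the paper) computes both bounds for a given dimension d, misclassification tolerance epsilon, and failure probability delta:

```python
import math

def draw_bounds(d, eps, delta):
    """Evaluate the sample-complexity bounds quoted in the abstract for
    learning the tightest band containing a d-dimensional parameter region,
    with misclassification error at most eps and probability at least 1 - delta.

    Returns (lower, upper): at least `lower` draws are needed,
    and `upper` draws suffice.
    """
    # Lower bound: max{(1 - eps) ln(1/delta), (3/16) d} / eps
    lower = max((1 - eps) * math.log(1 / delta), (3 / 16) * d) / eps
    # Upper bound: min{2d ln(2d/delta), e (2d + ln(1/delta))} / eps
    upper = min(2 * d * math.log(2 * d / delta),
                math.e * (2 * d + math.log(1 / delta))) / eps
    return lower, upper

# Example: a 2-dimensional region, 5% error tolerance, 95% confidence
lo, hi = draw_bounds(d=2, eps=0.05, delta=0.05)
```

Note that both bounds scale linearly in d and are uniform over the region's true shape, so this calculation needs no knowledge of the set being approximated.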
Keywords
Machine learning,Supervised learning,Set-identified models,Structural vector autoregressions