Universality and approximation bounds for echo state networks with random weights
arXiv (2022)
Abstract
We study the uniform approximation of echo state networks with randomly
generated internal weights. These models, in which only the readout weights are
optimized during training, have achieved empirical success in learning dynamical
systems. Recent results showed that echo state networks with ReLU activation
are universal. In this paper, we give an alternative construction and prove
that universality holds for general activation functions. Specifically, our
main result shows that, under certain conditions on the activation function,
there exists a sampling procedure for the internal weights such that the echo
state network can approximate any continuous causal time-invariant operator
with high probability. In particular, for ReLU activation, we give explicit
constructions for these sampling procedures. We also quantify the approximation
error of the constructed ReLU echo state networks for sufficiently regular
operators.
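To make the setting concrete, below is a minimal sketch of an echo state network of the kind the abstract describes: the internal (reservoir) and input weights are drawn randomly and fixed, and only the linear readout is fit. This is a generic illustration, not the paper's specific sampling procedure; the reservoir size, weight scales, and the moving-average target are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res, n_in = 100, 1  # reservoir size and input dimension (arbitrary)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))  # random input weights, fixed
W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # random internal weights, fixed
# Scale so the spectral norm is below 1; with the 1-Lipschitz ReLU this
# makes the state map a contraction (echo state property).
W *= 0.9 / np.linalg.norm(W, 2)

def run_reservoir(u):
    """Drive the reservoir with the scalar input sequence u (ReLU activation)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.maximum(0.0, W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Target: a simple causal time-invariant operator (3-step moving average).
T = 500
u = rng.uniform(-1.0, 1.0, T)
y = np.convolve(u, np.ones(3) / 3, mode="full")[:T]

X = run_reservoir(u)
# Only the readout weights are trained, via ridge-regularized least squares.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
mse = np.mean((pred - y) ** 2)
```

Because training reduces to a linear least-squares problem in the readout weights, fitting is cheap even when the reservoir is large; the paper's question is which random distributions for `W` and `W_in` make this architecture universal.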