Shallow ReLU neural networks and finite elements
CoRR (2024)
Abstract
We point out that (continuous or discontinuous) piecewise linear functions on
a convex polytope mesh can be represented, in a weak sense, by two-hidden-layer
ReLU neural networks. In addition, the numbers of neurons in the two hidden
layers required for this weak representation are given exactly in terms of the
numbers of polytopes and hyperplanes involved in the mesh. The results hold in
particular for constant and linear finite element functions. Such weak
representation builds a bridge between shallow ReLU neural networks and finite
element functions, and offers a perspective for analyzing the approximation
capability of ReLU neural networks in the L^p norm via finite element
functions. Moreover, we discuss the strict representation of tensor finite
element functions via the recent tensor neural networks.
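As a minimal one-dimensional illustration of the bridge described above (a sketch, not taken from the paper): the piecewise linear "hat" basis function of linear finite elements on the nodes {0, 1, 2} is exactly a shallow ReLU network, and in 1-D a single hidden layer with three neurons already suffices; the paper's two-hidden-layer construction addresses general convex polytope meshes in higher dimensions.

```python
def relu(x):
    """The ReLU activation: max(x, 0)."""
    return max(x, 0.0)

def hat(x):
    """Linear finite element hat function on nodes {0, 1, 2},
    written as a one-hidden-layer ReLU network with three neurons:
        hat(x) = ReLU(x) - 2*ReLU(x - 1) + ReLU(x - 2)
    It rises linearly from 0 at x=0 to 1 at x=1, falls back to 0
    at x=2, and vanishes outside [0, 2]."""
    return relu(x) - 2.0 * relu(x - 1.0) + relu(x - 2.0)
```

For example, `hat(1.0)` returns `1.0` (the nodal value) and `hat(0.5)` returns `0.5` (linear interpolation), matching the finite element basis function pointwise.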