On the Generalizability of Two-Layer ReLU-activated Neural Networks with a Fourier Feature Embedding

Nithin Raghavan, Richard Hu, Alberto L. Checcone

semanticscholar (2021)

Abstract
We analyze generalization bounds for binary coordinate-based multi-layer perceptrons (MLPs) with an input Fourier Feature encoding using Rademacher complexity analysis. Coordinate-based MLPs are artificial neural networks (ANNs) that take a low-dimensional input (usually a 2D or 3D coordinate) and are increasingly used in computer graphics and computer vision problems such as 3D shape regression, 2D image regression, CT, MRI, and more. However, MLPs with a raw coordinate input usually do not perform well in practice, as they fail to learn high-frequency features. Recent work by Tancik et al. [19] introduced the Fourier Feature embedding, a generalization of the sinusoidal positional encoding found in many papers [21, 13, 20], which maps an input coordinate v ∈ R onto the surface of a hypersphere as follows:
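For reference, a standard form of the Fourier feature mapping from Tancik et al. [19] is sketched below; the symbols b_j (frequency vectors), a_j (amplitudes), and m (number of features) are assumed notation rather than quoted from this paper:

\[
\gamma(v) = \big[\, a_1 \cos(2\pi \mathbf{b}_1^{\top} v),\; a_1 \sin(2\pi \mathbf{b}_1^{\top} v),\; \ldots,\; a_m \cos(2\pi \mathbf{b}_m^{\top} v),\; a_m \sin(2\pi \mathbf{b}_m^{\top} v) \,\big]^{\top}
\]

Each b_j is typically sampled from an isotropic Gaussian whose scale controls the bandwidth of the functions the network can fit; if the amplitudes are normalized so that \(\sum_j a_j^2 = 1\), then \(\|\gamma(v)\| = 1\) for every v, which is what places the embedded coordinates on the surface of a hypersphere.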