We propose a framework to encode the geometric structure of the special Euclidean motion group SE(2) in convolutional networks, yielding translation and rotation equivariance through the introduction of SE(2) group convolution layers.
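As a concrete illustration of the idea (a sketch only, not the paper's implementation), the lifting convolution below replaces continuous SE(2) with its discrete subgroup of translations and 90° rotations (p4): the input is correlated with four rotated copies of a single filter, so the output gains a rotation axis. All function names here are illustrative.

```python
import numpy as np

def corr2d(x, k):
    # Plain 'valid' cross-correlation.
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def lifting_conv(x, k):
    # Lift to the group p4: correlate with the filter rotated by
    # 0, 90, 180 and 270 degrees; the output gains a rotation axis.
    return np.stack([corr2d(x, np.rot90(k, r)) for r in range(4)])

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
k = rng.normal(size=(3, 3))

y = lifting_conv(x, k)
y_rot = lifting_conv(np.rot90(x), k)

# Equivariance: a 90-degree input rotation rotates each feature plane
# and cyclically shifts the rotation channel.
equivariant = np.allclose(
    y_rot, np.stack([np.rot90(y[(r - 1) % 4]) for r in range(4)])
)
```

This built-in weight sharing over rotated filters is what removes the need to learn each orientation separately from augmented data.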
We demonstrate that the Neural Equivariant Interatomic Potential, a new type of graph neural network built on SE(3)-equivariant convolutions, exhibits state-of-the-art accuracy and exceptional data efficiency on data sets of small molecules and periodic materials.
Instead of the data augmentation approach used for vanilla convolutional neural networks to ‘train’ the symmetry into the model, group equivariant CNNs have a built-in symmetry that leads to improvements in both performance and data efficiency.
We provide sufficient conditions for the universality of rotation equivariant point cloud networks, and use these conditions both to show that current models are universal and to devise new universal architectures.
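For intuition about the kind of features rotation-invariant point cloud models compute (a minimal sketch, not one of the architectures analysed in the paper), the Gram matrix of centered coordinates is a standard invariant: it is unchanged by any rotation or translation of the cloud.

```python
import numpy as np

def invariant_features(points):
    # Gram matrix of centered coordinates: invariant to rotations
    # (and reflections) and, after centering, to translations.
    c = points - points.mean(axis=0)
    return c @ c.T

rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 3))

# Random rotation matrix via QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:
    q[:, 0] = -q[:, 0]  # force a proper rotation (det = +1)

moved = pts @ q.T + np.array([1.0, -2.0, 0.5])  # rotate, then translate
F, F_moved = invariant_features(pts), invariant_features(moved)
```

Universality results ask whether compositions of such equivariant and invariant building blocks can approximate every equivariant function on point clouds.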
We prove these properties mathematically and demonstrate them numerically by training a Euclidean symmetry equivariant neural network to learn a symmetry-breaking input that deforms a square into a rectangle.
The network is applied to 3D object classification, retrieval, and alignment, but has potential applications to spherical images such as panoramas, or to any data that can be represented as a spherical function.
We observe that the exponential can be computed implicitly. Using this observation, we develop new invertible transformations, named the convolution exponential and graph convolution exponential, and show that they retain their equivariance properties under exponentiation.
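A minimal sketch of the mechanism (illustrative names and kernel, using a 1-D circular convolution in place of the paper's layers): a convolution is a linear map M, so exp(M)x can be accumulated implicitly from the power series x + Mx + M²x/2! + …, the inverse is exp(−M), and because M commutes with cyclic shifts, so does its exponential.

```python
import numpy as np

def circular_conv(x, k):
    # The linear map M: centered circular 1-D cross-correlation.
    n, half = len(x), len(k) // 2
    out = np.zeros(n)
    for i in range(n):
        for j, kv in enumerate(k):
            out[i] += kv * x[(i + j - half) % n]
    return out

def conv_exp(x, k, terms=20):
    # exp(M) x via the truncated series x + Mx + M^2 x / 2! + ...
    # M itself is never materialised as a matrix.
    out, term = x.copy(), x.copy()
    for t in range(1, terms):
        term = circular_conv(term, k) / t
        out += term
    return out

def conv_exp_inverse(y, k, terms=20):
    # The inverse of exp(M) is exp(-M): run the series with -k.
    return conv_exp(y, -k, terms)

rng = np.random.default_rng(1)
x = rng.normal(size=8)
k = np.array([0.1, -0.2, 0.1])  # small kernel, so the series converges fast

y = conv_exp(x, k)
x_rec = conv_exp_inverse(y, k)  # recovers x
```

Invertibility makes such layers usable in normalizing flows, while shift equivariance of the underlying convolution carries over to the exponentiated map.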