Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
International Conference on Artificial Intelligence and Statistics (2023)
Abstract
We propose a new class of linear Transformers called
FourierLearner-Transformers (FLTs), which incorporate a wide range of relative
positional encoding mechanisms (RPEs). These include regular RPE techniques
applied for sequential data, as well as novel RPEs operating on geometric data
embedded in higher-dimensional Euclidean spaces. FLTs construct the optimal RPE
mechanism implicitly by learning its spectral representation. As opposed to
other architectures combining efficient low-rank linear attention with RPEs,
FLTs remain practical in terms of their memory usage and do not require
additional assumptions about the structure of the RPE mask. Furthermore, FLTs allow
certain structural inductive bias techniques to be applied to specify masking
strategies; for example, they provide a way to learn the so-called local RPEs
introduced in this paper, which yield accuracy gains compared with several other
linear Transformers for language modeling. We also thoroughly test FLTs on
other data modalities and tasks, such as image classification, 3D molecular
modeling, and learnable optimizers. To the best of our knowledge, for 3D
molecular data, FLTs are the first Transformer architectures providing linear
attention and incorporating RPE masking.
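
To illustrate the mechanism summarized above, the toy NumPy snippet below shows how an RPE mask parametrized through its spectral (Fourier) representation factorizes into per-token features and can therefore be folded into low-rank linear attention in time linear in the sequence length. This is a minimal sketch under simplifying assumptions (1D positions, randomly sampled frequencies `xi`, random coefficients `g` standing in for learned spectral parameters, and `np.exp` features standing in for a Performer-style kernel), not the authors' implementation.

```python
import numpy as np

# Toy sketch of the FLT idea: parametrize the RPE function f through its spectral
# representation so that the mask f(r_i - r_j) factorizes into per-token features
# and never has to be materialized as an L x L matrix.
# All names (xi, g, phi_q, phi_k, q_feat, k_feat) are illustrative assumptions.

rng = np.random.default_rng(0)
L, d, M = 64, 16, 128                                # sequence length, head dim, number of frequencies

positions = np.arange(L, dtype=float)[:, None]       # 1D positions r_i (could be 3D coordinates)
xi = rng.normal(size=(M, 1))                         # sampled frequencies
g = rng.normal(size=M)                               # spectral coefficients g(xi_m); learnable in FLT

# f(r_i - r_j) ~ (1/M) * sum_m g(xi_m) * exp(2*pi*i*xi_m*r_i) * exp(-2*pi*i*xi_m*r_j)
phi_q = np.exp(2j * np.pi * positions @ xi.T) * (g / M)   # (L, M)
phi_k = np.exp(-2j * np.pi * positions @ xi.T)            # (L, M)

# Placeholder non-negative feature maps standing in for Performer-style kernel features.
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
q_feat, k_feat = np.exp(Q), np.exp(K)

# Unnormalized RPE-modulated attention, computed in time linear in L:
#   out_i = sum_j f(r_i - r_j) * <q_feat_i, k_feat_j> * v_j
kv = np.einsum('jm,jd,je->mde', phi_k, k_feat, V)         # (M, d, d)
out = np.real(np.einsum('im,id,mde->ie', phi_q, q_feat, kv))

# Sanity check against the explicit O(L^2) masked attention.
rpe_mask = np.real(phi_q @ phi_k.T)                       # approximates f(r_i - r_j)
explicit = (rpe_mask * (q_feat @ k_feat.T)) @ V
assert np.allclose(out, explicit)
```

The same factorization applies when positions are coordinates in a higher-dimensional Euclidean space, which is how the abstract's claim about geometric data such as 3D molecules can be reconciled with linear attention.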