Positional Encodings for Light Curve Transformers: Playing with Positions and Attention

arXiv (Cornell University), 2023

Abstract
We conducted empirical experiments to assess the transferability of a light curve transformer to datasets with different cadences and magnitude distributions using various positional encodings (PEs). We proposed a new approach that incorporates the temporal information directly into the output of the last attention layer. Our results indicated that using trainable PEs leads to significant improvements in transformer performance and training times. Our proposed PE on attention can be trained faster than the traditional non-trainable-PE transformer while achieving competitive results when transferred to other datasets.
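
As a rough illustration of the idea described in the abstract, below is a minimal PyTorch sketch of a trainable temporal encoding added to the output of the final attention (encoder) layer rather than to the input embeddings. The module names (`TrainableTimeEncoding`, `PEOnAttentionEncoder`) and the learned Fourier-style time projection are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch only: one reading of "PE on attention" for irregularly sampled
# light curves. Names and the time-embedding form are assumptions.
import torch
import torch.nn as nn


class TrainableTimeEncoding(nn.Module):
    """Maps observation times (irregular cadence) to d_model-dim vectors
    via a trainable Fourier-style projection (hypothetical design)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.freq = nn.Linear(1, d_model // 2)  # learnable frequencies/phases

    def forward(self, times: torch.Tensor) -> torch.Tensor:
        # times: (batch, seq_len) observation times, e.g. in days
        proj = self.freq(times.unsqueeze(-1))           # (batch, seq, d_model/2)
        return torch.cat([proj.sin(), proj.cos()], -1)  # (batch, seq, d_model)


class PEOnAttentionEncoder(nn.Module):
    """Encoder that adds the trainable time encoding to the output of the
    last attention layer instead of to the input embeddings."""

    def __init__(self, d_model: int = 64, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.time_pe = TrainableTimeEncoding(d_model)

    def forward(self, x: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)             # no PE added at the input
        return h + self.time_pe(times)  # temporal info injected at the end
```

Because the encoding is injected after the attention stack, the attention layers themselves see order-agnostic embeddings; if this matches the paper's intent, only the small time-encoding module carries the cadence-specific information, which would be consistent with the reported faster training and transferability across datasets with different cadences.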
Keywords
light curve transformers, positional encodings, positions