Sub-Adjacent Transformer: Improving Time Series Anomaly Detection with Reconstruction Error from Sub-Adjacent Neighborhoods
CoRR (2024)
Abstract
In this paper, we present the Sub-Adjacent Transformer with a novel attention
mechanism for unsupervised time series anomaly detection. Unlike previous
approaches that rely on all the points within some neighborhood for time point
reconstruction, our method restricts the attention to regions not immediately
adjacent to the target points, termed sub-adjacent neighborhoods. Our key
observation is that owing to the rarity of anomalies, they typically exhibit
more pronounced differences from their sub-adjacent neighborhoods than from
their immediate vicinities. By focusing the attention on the sub-adjacent
areas, we make the reconstruction of anomalies more challenging, thereby
enhancing their detectability. Technically, our approach concentrates attention
on the non-diagonal areas of the attention matrix by enlarging the
corresponding elements in the training stage. To facilitate the implementation
of the desired attention matrix pattern, we adopt linear attention because of
its flexibility and adaptability. Moreover, a learnable mapping function is
proposed to improve the performance of linear attention. Empirically, the
Sub-Adjacent Transformer achieves state-of-the-art performance across six
real-world anomaly detection benchmarks, covering diverse fields such as server
monitoring, space exploration, and water treatment.
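The abstract describes restricting each time point's attention to a band of positions that are near, but not immediately adjacent, to it. The paper's exact span parameters are not given here; a minimal sketch of such a sub-adjacent attention mask, with hypothetical window bounds `near` and `far`, might look like:

```python
import numpy as np

def sub_adjacent_mask(seq_len: int, near: int, far: int) -> np.ndarray:
    """Boolean mask: True where attention is allowed.

    For query position i, positions j with near <= |i - j| <= far
    form its sub-adjacent neighborhood. The immediate vicinity
    (|i - j| < near), including the diagonal itself, is masked out,
    so a point cannot be reconstructed from its nearest neighbors.
    """
    idx = np.arange(seq_len)
    dist = np.abs(idx[:, None] - idx[None, :])  # pairwise time distances
    return (dist >= near) & (dist <= far)

# Example: 8 time steps, attend only to points 2-4 steps away.
mask = sub_adjacent_mask(8, near=2, far=4)
```

In a transformer, such a mask would be applied to the attention scores (e.g. by setting disallowed entries to a large negative value before the normalization step), which concentrates weight on the non-diagonal band the abstract refers to.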