CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection
arXiv (2023)
Abstract
One main challenge in time series anomaly detection (TSAD) is the lack of
labelled data in many real-life scenarios. Most of the existing anomaly
detection methods focus on learning the normal behaviour of unlabelled time
series in an unsupervised manner. The normal boundary is often defined tightly,
resulting in slight deviations being classified as anomalies, consequently
leading to a high false positive rate and a limited ability to generalise
normal patterns. To address this, we introduce a novel end-to-end
self-supervised ContrAstive Representation Learning approach for time series
Anomaly detection (CARLA). While existing contrastive learning methods assume
that augmented time series windows are positive samples and temporally distant
windows are negative samples, we argue that these assumptions are limited:
augmentation of a time series can transform it into a negative sample, and a
temporally distant window can represent a positive sample. Our contrastive
approach leverages existing generic knowledge about time series anomalies and
injects various types of anomalies as negative samples. Therefore, CARLA not
only learns normal behaviour but also learns deviations indicating anomalies.
It creates similar representations for temporally close windows and distinct
ones for anomalies. Additionally, it leverages the information about
representations' neighbours through a self-supervised approach to classify
windows based on their nearest/furthest neighbours to further enhance the
performance of anomaly detection. In extensive tests on seven major real-world
time series anomaly detection datasets, CARLA shows superior performance over
state-of-the-art self-supervised and unsupervised TSAD methods. Our research
shows the potential of contrastive representation learning to advance time
series anomaly detection.
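The core idea in the abstract — treating a temporally close window as a positive sample and an anomaly-injected copy of the window as a negative sample — can be sketched with a toy triplet objective. This is a minimal illustration, not CARLA's actual architecture or loss: the spike injection, the linear encoder, and the margin value are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_spike(window, magnitude=5.0):
    """Hypothetical anomaly injection: add a point spike at a random position."""
    w = window.copy()
    w[rng.integers(len(w))] += magnitude
    return w

def embed(window, W):
    """Toy linear encoder standing in for CARLA's learned representation network."""
    return W @ window

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward a temporally close window,
    push it away from the anomaly-injected window."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

series = np.sin(np.linspace(0.0, 20.0, 200))
anchor_w = series[0:50]
positive_w = series[5:55]            # temporally close window -> positive sample
negative_w = inject_spike(anchor_w)  # injected anomaly -> negative sample

W = rng.normal(size=(8, 50)) / np.sqrt(50)
loss = triplet_loss(embed(anchor_w, W),
                    embed(positive_w, W),
                    embed(negative_w, W))
print(loss >= 0.0)
```

Training an encoder to minimise such a loss over many windows would yield representations in which normal windows cluster together and injected (and, by generalisation, real) anomalies are pushed apart, which is the behaviour the abstract describes.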