Unsupervised Multi-modal Feature Alignment for Time Series Representation Learning
CoRR(2023)
Abstract
In recent times, unsupervised representation learning (URL) for time series
data has garnered significant interest due to its adaptability across diverse
downstream applications. Because unsupervised learning objectives differ from
those of downstream tasks, it is difficult to guarantee downstream utility by
characterizing temporal features alone. To fill this gap, researchers have
proposed multiple transformations that extract the discriminative patterns
implicit in informative time series. Despite the variety of feature
engineering techniques introduced (e.g., spectral-domain features,
wavelet-transformed features, image-form features, and symbolic features), the
reliance on intricate feature fusion methods and on heterogeneous features at
inference time hampers the scalability of these solutions. To address this,
our study introduces an approach, inspired by spectral graph theory, that
aligns and binds time series representations encoded from different
modalities, thereby guiding the neural encoder to uncover latent pattern
associations among these multi-modal features. In contrast to conventional
methods that fuse features from multiple modalities, the proposed approach
retains a single time series encoder, simplifying the neural architecture and
preserving scalability. We further demonstrate, and prove mechanisms by which,
the encoder maintains a better inductive bias. In our experimental evaluation,
we validate the proposed method on a diverse set of time series datasets from
various domains; it outperforms existing state-of-the-art URL methods across
diverse downstream tasks.
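To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual architecture) of aligning a raw time series view with a spectral view of the same series. The toy linear encoders, the choice of an FFT magnitude spectrum as the second modality, and the cosine-based alignment loss are all illustrative assumptions; the paper's method involves a trained neural encoder and spectral-graph-theoretic guidance.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear encoder followed by L2 normalization (illustrative only)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# A batch of 8 univariate time series, each of length 64.
x = rng.standard_normal((8, 64))

# Second modality: magnitude spectrum of the same series
# (rfft of a length-64 real signal yields 33 frequency bins).
x_spec = np.abs(np.fft.rfft(x, axis=1))

# Independent toy encoders mapping each modality into a shared 16-d space.
W_time = rng.standard_normal((64, 16))
W_spec = rng.standard_normal((33, 16))

z_time = encode(x, W_time)
z_spec = encode(x_spec, W_spec)

# Alignment loss: 1 minus the mean cosine similarity between paired
# embeddings of the two modalities; minimizing it pulls the views together.
align_loss = 1.0 - float(np.mean(np.sum(z_time * z_spec, axis=1)))
print(align_loss)
```

Only the time-domain encoder would be kept at inference time under this scheme; the spectral view serves purely as a training-time alignment signal, which is what keeps the deployed architecture a single encoder.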