Empowering Time Series Analysis with Large Language Models: A Survey
arXiv (2024)
Abstract
Recently, remarkable progress has been made on large language models
(LLMs), demonstrating their unprecedented capability across a variety of
natural language tasks. However, training a large general-purpose model from
scratch is challenging for time series analysis, due to the large volumes
and varieties of time series data, as well as the non-stationarity that leads
to concept drift, impeding continuous model adaptation and re-training. Recent
advances have shown that pre-trained LLMs can be exploited to capture complex
dependencies in time series data and facilitate various applications. In this
survey, we provide a systematic overview of existing methods that leverage LLMs
for time series analysis. Specifically, we first state the challenges and
motivations of applying language models in the context of time series, as well
as brief preliminaries of LLMs. Next, we summarize the general pipeline for
LLM-based time series analysis, categorize existing methods into different
groups (i.e., direct query, tokenization, prompt design, fine-tuning, and model
integration), and highlight the key ideas within each group. We also discuss
the applications of LLMs for both general and spatial-temporal time series
data, tailored to specific domains. Finally, we thoroughly discuss future
research opportunities to empower time series analysis with LLMs.