Combining contextual neural networks for time series classification

Neurocomputing (2020)

Abstract
Ten years ago, linear models were applied across various domains. Before applying such algorithms, many studies extracted features presumed to capture local patterns in the data using hand-engineering techniques. Recently, deep learning has made it possible to feed data directly into a model without extensive hand-crafted feature engineering. In this paper, the proposed framework performs feature extraction in an unsupervised (i.e., self-supervised) manner using both Contextual Long Short-Term Memory (CLSTM) and Contextual Convolutional Neural Network (CCNN) blocks. The outputs of the CLSTM and CCNN blocks are concatenated, fed into an Attention block, passed through a Multilayer Perceptron (MLP) block, and finally through a terminal layer for classification. The task is non-trivial, as a major challenge arises when applying the model to the time series classification (TSC) problem: overfitting. We address this challenge as follows: first, we adjust the number of neurons in each stage; second, we introduce dropout after every layer in each stage of the model. Finally, experiments on the University of California, Riverside (UCR) datasets indicate the model's superiority.
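The pipeline described in the abstract (two feature-extraction branches, concatenation, attention, an MLP block, and dropout after every layer) can be sketched as follows. This is a minimal PyTorch approximation, not the authors' implementation: the paper's exact Contextual LSTM and Contextual CNN blocks are not specified here, so a standard LSTM and a Conv1d layer stand in for them, and all sizes (hidden width, kernel size, dropout rate, class count) are illustrative assumptions.

```python
# Minimal sketch of the described pipeline. Assumptions: standard LSTM/Conv1d
# stand in for the paper's Contextual LSTM/CNN blocks; all sizes are illustrative.
import torch
import torch.nn as nn

class CombinedTSCModel(nn.Module):
    def __init__(self, in_channels=1, hidden=64, n_classes=10, p_drop=0.3):
        super().__init__()
        # Recurrent branch (stand-in for the CLSTM block)
        self.lstm = nn.LSTM(in_channels, hidden, batch_first=True)
        self.lstm_drop = nn.Dropout(p_drop)  # dropout after every layer, per the paper
        # Convolutional branch (stand-in for the CCNN block)
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Dropout(p_drop),
        )
        # Attention over the concatenated features (single-head, illustrative)
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=1,
                                          batch_first=True)
        self.attn_drop = nn.Dropout(p_drop)
        # MLP block followed by the terminal classification layer
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):                      # x: (batch, seq_len, in_channels)
        h_lstm, _ = self.lstm(x)               # (batch, seq_len, hidden)
        h_lstm = self.lstm_drop(h_lstm)
        h_conv = self.conv(x.transpose(1, 2))  # (batch, hidden, seq_len)
        h_conv = h_conv.transpose(1, 2)        # (batch, seq_len, hidden)
        h = torch.cat([h_lstm, h_conv], dim=-1)  # concatenate the two branches
        h, _ = self.attn(h, h, h)                # attention block
        h = self.attn_drop(h).mean(dim=1)        # pool over time
        return self.mlp(h)                       # class logits

logits = CombinedTSCModel()(torch.randn(8, 128, 1))  # e.g. a UCR-style batch
print(logits.shape)                                  # torch.Size([8, 10])
```

Pooling the attention output over time before the MLP is one simple way to obtain a fixed-size vector for classification; the paper may use a different reduction.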
Keywords
Time series classification, Contextual convolutional neural networks, Contextual long short-term memory, Attention, Multilayer perceptron