Edge Guided Generation Network for Video Prediction

2018 IEEE International Conference on Multimedia and Expo (ICME)

Abstract
Video prediction is a challenging problem due to the highly complex variations of video appearance and motion. Traditional methods that directly predict pixel values often produce blurring and artifacts, and cumulative errors cause prediction quality to drop sharply in long-term prediction. To alleviate these problems, we propose a novel edge-guided video prediction network, which first models the dynamics of frame edges and predicts future frame edges, then generates future frames under the guidance of those predicted edges. Specifically, our network consists of two modules: a ConvLSTM-based edge prediction module and an edge-guided frame generation module. The whole network is differentiable and can be trained end-to-end without any additional supervision. Extensive experiments on the KTH human action dataset and the challenging autonomous-driving KITTI dataset demonstrate that our method outperforms state-of-the-art methods, especially in long-term video prediction.
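The two-stage data flow described in the abstract — extract edge maps from past frames, predict future edge maps, then generate frames conditioned on those edges — can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the edge extractor (Sobel), the edge predictor (naive linear extrapolation standing in for the ConvLSTM module), and the frame generator (a simple blend standing in for the learned generation module) are all hypothetical placeholders chosen only to show how the tensors move through the pipeline.

```python
import numpy as np

def sobel_edges(frame):
    """Edge-map extraction via Sobel filters (the paper does not fix an
    extractor here; Sobel is an illustrative choice)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def predict_future_edges(past_edges, n_future):
    """Stand-in for the ConvLSTM edge-prediction module: linearly
    extrapolates the motion between the last two edge maps."""
    prev, last = past_edges[-2], past_edges[-1]
    delta = last - prev
    out, cur = [], last
    for _ in range(n_future):
        cur = cur + delta
        out.append(cur)
    return out

def generate_frame(last_frame, future_edge):
    """Stand-in for the edge-guided generation module: blends the last
    observed frame with the (normalized) predicted edge map."""
    e = future_edge / (future_edge.max() + 1e-8)
    return 0.9 * last_frame + 0.1 * e

# Toy end-to-end pipeline on random 16x16 "frames".
frames = [np.random.rand(16, 16) for _ in range(4)]
edges = [sobel_edges(f) for f in frames]          # stage 0: edge maps
future_edges = predict_future_edges(edges, 2)     # stage 1: predict edges
future_frames = [generate_frame(frames[-1], e)    # stage 2: generate frames
                 for e in future_edges]
```

In the actual network both stand-ins are learned jointly, which is what makes the pipeline trainable end-to-end; the sketch only fixes the interfaces between the two modules.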
Keywords
Video prediction, deep learning, spatial-temporal network, image generation