A Review of Video Generation Approaches

2020 International Conference on Power, Instrumentation, Control and Computing (PICC)(2020)

Cited 3 | Views 10
Abstract
Generating videos from a few initial frames is an appealing field of research in deep learning, and an ever-expanding array of approaches exists for generating long-range, realistic sequences of video frames. Video generation can help predict trajectories and model object movements, enhancing autonomous robots. However, only a few comprehensive studies review these approaches in terms of their relative advantages, disadvantages, and evolution. Hence, this paper presents a detailed overview of deep learning based approaches employed to tackle the complex problem of video generation. The approaches include Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and the Transformer model. Finally, the performance of all the approaches is examined and compared on the BAIR Robot Pushing dataset.
Key words
Video Generation, Deep Learning, Generative Adversarial Networks, Variational Autoencoders, Transformer