VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning
arXiv (2023)
Abstract
Recent text-to-video (T2V) generation methods have seen significant
advancements. However, the majority of these works focus on producing short
video clips of a single event (i.e., single-scene videos). Meanwhile, recent
large language models (LLMs) have demonstrated their capability in generating
layouts and programs to control downstream visual modules. This prompts an
important question: can we leverage the knowledge embedded in these LLMs for
temporally consistent long video generation? In this paper, we propose
VideoDirectorGPT, a novel framework for consistent multi-scene video generation
that uses the knowledge of LLMs for video content planning and grounded video
generation. Specifically, given a single text prompt, we first ask our video
planner LLM (GPT-4) to expand it into a 'video plan', which includes the scene
descriptions, the entities with their respective layouts, the background for
each scene, and consistency groupings of the entities. Next, guided by this
video plan, our video generator, named Layout2Vid, has explicit control over
spatial layouts and can maintain temporal consistency of entities across
multiple scenes, while being trained only with image-level annotations. Our
experiments demonstrate that our proposed VideoDirectorGPT framework
substantially improves layout and movement control in both single- and
multi-scene video generation and can generate multi-scene videos with
consistency, while achieving competitive performance with SOTAs in open-domain
single-scene T2V generation. Detailed ablation studies, including dynamic
adjustment of layout control strength with an LLM and video generation with
user-provided images, confirm the effectiveness of each component of our
framework and its future potential.
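To make the two-stage pipeline the abstract describes more concrete, here is a minimal Python sketch of what the 'video plan' and the planner/generator interface might look like. All class, function, and field names below (`Entity`, `Scene`, `VideoPlan`, `plan_video`, `generate_video`) are illustrative assumptions, not the paper's actual schema or API.

```python
# Hypothetical sketch of VideoDirectorGPT's two stages; names and fields
# are assumptions for illustration, not the paper's actual code.
from dataclasses import dataclass
from typing import Dict, List, Tuple

# An axis-aligned bounding box in normalized [0, 1] frame coordinates.
BBox = Tuple[float, float, float, float]  # (x0, y0, x1, y1)

@dataclass
class Entity:
    name: str            # e.g. "golden retriever"
    layout: List[BBox]   # per-frame (or per-keyframe) bounding boxes

@dataclass
class Scene:
    description: str     # natural-language description of this scene's event
    background: str      # background prompt for this scene
    entities: List[Entity]

@dataclass
class VideoPlan:
    scenes: List[Scene]
    # Entities that should look identical across scenes share a group;
    # the generator can reuse a shared identity representation for them.
    consistency_groups: Dict[str, List[str]]  # group id -> entity names

def plan_video(prompt: str) -> VideoPlan:
    """Stage 1: ask the planner LLM (GPT-4 in the paper) to expand a single
    text prompt into a multi-scene video plan. The prompt template and
    response parsing are placeholders here."""
    ...

def generate_video(plan: VideoPlan):
    """Stage 2: a layout-grounded video generator (Layout2Vid in the paper)
    renders each scene, conditioning on the entity layouts and keeping
    entities within a consistency group visually consistent across scenes."""
    ...
```

Under this reading, scene-level layouts give the generator explicit spatial control, while the consistency groupings are what let the same character or object recur across scenes without drifting in appearance.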