Taming Stable Diffusion for Text to 360$^{\circ}$ Panorama Image Generation
CVPR 2024
Abstract
Generative models, e.g., Stable Diffusion, have enabled the creation of
photorealistic images from text prompts. Yet, the generation of 360-degree
panorama images from text remains a challenge, particularly due to the dearth
of paired text-panorama data and the domain gap between panorama and
perspective images. In this paper, we introduce a novel dual-branch diffusion
model named PanFusion to generate a 360-degree image from a text prompt. We
leverage the Stable Diffusion model as one branch to provide prior knowledge in
natural image generation and register it to another panorama branch for
holistic image generation. We propose a unique cross-attention mechanism with
projection awareness to minimize distortion during the collaborative denoising
process. Our experiments validate that PanFusion surpasses existing methods
and, thanks to its dual-branch structure, can integrate additional constraints
like room layout for customized panorama outputs. Code is available at
https://chengzhag.github.io/publication/panfusion.
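The projection awareness mentioned above rests on the geometric correspondence between perspective views and the equirectangular panorama: each pixel of a perspective camera maps to a definite longitude/latitude, and hence a pixel, in the 360° image. The sketch below computes this mapping with NumPy; the function name, parameters, and camera conventions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def perspective_to_equirect(h, w, fov_deg, yaw_deg, pitch_deg, pano_h, pano_w):
    """For each pixel of an h x w perspective view, return its (row, col)
    location in an equirectangular panorama of size pano_h x pano_w.
    A projection-aware cross-attention could use such a correspondence to
    align features between the two branches (hypothetical sketch)."""
    # Focal length in pixels from the horizontal field of view
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)
    # Camera-frame ray directions (x right, y down, z forward)
    xs = (np.arange(w) - (w - 1) / 2) / f
    ys = (np.arange(h) - (h - 1) / 2) / f
    x, y = np.meshgrid(xs, ys)
    z = np.ones_like(x)
    d = np.stack([x, y, z], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Rotate rays: apply pitch (about x) first, then yaw (about y)
    cy, sy = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    cp, sp = np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    d = d @ (Ry @ Rx).T
    # Rays -> longitude/latitude -> panorama pixel coordinates
    lon = np.arctan2(d[..., 0], d[..., 2])      # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))  # [-pi/2, pi/2]
    col = (lon / (2 * np.pi) + 0.5) * (pano_w - 1)
    row = (lat / np.pi + 0.5) * (pano_h - 1)
    return row, col
```

For example, the central pixel of a forward-facing camera (yaw and pitch both zero) lands at the center of the panorama, while a 90° yaw shifts it a quarter of the panorama's width to the right.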