GenesisTex: Adapting Image Denoising Diffusion to Texture Space
CVPR 2024
Abstract
We present GenesisTex, a novel method for synthesizing textures for 3D
geometries from text descriptions. GenesisTex adapts the pretrained image
diffusion model to texture space by texture space sampling. Specifically, we
maintain a latent texture map for each viewpoint, which is updated with
predicted noise on the rendering of the corresponding viewpoint. The sampled
latent texture maps are then decoded into a final texture map. During the
sampling process, we focus on both global and local consistency across multiple
viewpoints: global consistency is achieved through the integration of style
consistency mechanisms within the noise prediction network, and low-level
consistency is achieved by dynamically aligning latent textures. Finally, we
apply reference-based inpainting and img2img on denser views for texture
refinement. Our approach overcomes the limitations of slow optimization in
distillation-based methods and instability in inpainting-based methods.
Experiments on meshes from various sources demonstrate that our method
surpasses the baseline methods quantitatively and qualitatively.
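The texture space sampling described above can be sketched as a loop that denoises one latent texture map per viewpoint and periodically aligns them. The sketch below is purely illustrative and is not the paper's implementation: the noise prediction is a random stub standing in for the diffusion network, rendering is treated as identity, the alignment step is a simple mean blend, and the decoder is a stand-in for a VAE decoder. All names and shapes are assumptions.

```python
import numpy as np

def texture_space_sampling(num_views=4, tex_res=8, steps=10, seed=0):
    """Illustrative sketch of per-view latent-texture denoising.

    Everything here is a stand-in: 'eps' mimics a noise-prediction
    network output, rendering a view is identity, and cross-view
    alignment is a naive blend toward the mean latent.
    """
    rng = np.random.default_rng(seed)
    # One latent texture map per viewpoint (hypothetical 4-channel latent).
    latents = [rng.standard_normal((tex_res, tex_res, 4))
               for _ in range(num_views)]
    for _ in range(steps):
        for v in range(num_views):
            rendering = latents[v]  # stand-in: render latent texture for view v
            eps = 0.01 * rng.standard_normal(rendering.shape)  # stub noise prediction
            latents[v] = latents[v] - (1.0 / steps) * eps      # denoising update
        # Dynamically align latent textures across views (here: mean blend).
        mean = np.mean(latents, axis=0)
        latents = [0.5 * lat + 0.5 * mean for lat in latents]
    # Decode the sampled latent textures into a final texture map
    # (stand-in for the actual decoder).
    return np.mean(latents, axis=0)

final_texture = texture_space_sampling()
```

After enough blend steps the per-view latents converge toward a shared texture, which is the intuition behind aligning latents for cross-view consistency.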