Layered 3D Human Generation via Semantic-Aware Diffusion Model
arXiv (2023)
Abstract
The generation of 3D clothed humans has attracted increasing attention in
recent years. However, existing work cannot generate layered high-quality 3D
humans with consistent body structures. As a result, these methods cannot
separately and freely edit the body and clothing of the generated human. In
this paper, we propose a text-driven layered 3D human generation
framework based on a novel physically-decoupled semantic-aware diffusion model.
To keep the generated clothing consistent with the target text, we propose a
semantic-confidence strategy for clothing that can eliminate the non-clothing
content generated by the model. To match the clothing with different body
shapes, we propose a SMPL-driven implicit field deformation network that
enables the free transfer and reuse of clothing. In addition, we introduce
uniform shape priors based on the SMPL model for the body and clothing,
respectively, which generate more diverse 3D content without being constrained
by specific templates. The experimental results demonstrate that the proposed method not
only generates 3D humans with consistent body structures but also allows free
editing in a layered manner. The source code will be made public.