Dynamic Appearance Modeling of Clothed 3D Human Avatars using a Single Camera
CoRR (2023)
Abstract
The appearance of a human in clothing is driven not only by the pose but also
by its temporal context, i.e., motion. However, such context has been largely
neglected by existing monocular human modeling methods, whose neural networks
often struggle to learn from a video of a person with large dynamics due to
motion ambiguity: even for the same pose, numerous geometric configurations of
the clothes are possible depending on the motion context. In
this paper, we introduce a method for high-quality modeling of clothed 3D human
avatars using a video of a person with dynamic movements. The main challenge
comes from the lack of 3D ground truth data of geometry and its temporal
correspondences. We address this challenge by introducing a novel compositional
human modeling framework that takes advantage of both explicit and implicit
human modeling. For explicit modeling, a neural network learns to generate
point-wise shape residuals and appearance features of a 3D body model by
comparing its 2D renderings with the original images. This explicit model
enables the reconstruction of discriminative 3D motion features in UV
space by encoding temporal correspondences across frames. For implicit modeling, an
implicit network combines the appearance and 3D motion features to decode
high-fidelity clothed 3D human avatars with motion-dependent geometry and
texture. The experiments show that our method can generate a large variation of
secondary motion in a physically plausible way.
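The abstract's compositional pipeline (explicit per-vertex residuals and appearance features, temporal motion encoding in UV space, and an implicit decoder producing motion-dependent geometry and texture) can be sketched as below. All layer shapes, feature dimensions, and function names are illustrative assumptions, not the authors' actual architecture; the learned networks are stood in for by fixed random linear maps.

```python
# Hedged sketch of the explicit/implicit compositional pipeline.
# Dimensions and maps are placeholders, not the paper's real model.
import numpy as np

rng = np.random.default_rng(0)

def explicit_model(pose, n_verts=100, feat_dim=8):
    """Predict per-vertex shape residuals and appearance features of a
    3D body model from a pose vector (stand-in: random linear maps)."""
    W_res = rng.standard_normal((pose.size, n_verts * 3)) * 0.01
    W_app = rng.standard_normal((pose.size, n_verts * feat_dim)) * 0.01
    residuals = (pose @ W_res).reshape(n_verts, 3)
    app_feats = (pose @ W_app).reshape(n_verts, feat_dim)
    return residuals, app_feats

def motion_features(residual_seq):
    """Encode temporal correspondences of the explicit residuals
    (here: per-vertex velocity averaged over the clip)."""
    seq = np.stack(residual_seq)      # (T, n_verts, 3)
    velocity = np.diff(seq, axis=0)   # finite differences over time
    return velocity.mean(axis=0)      # (n_verts, 3) motion descriptor

def implicit_decoder(app_feats, motion_feats):
    """Combine appearance and motion features to decode motion-dependent
    geometry (offsets) and texture (RGB in [0, 1]) per point."""
    x = np.concatenate([app_feats, motion_feats], axis=1)
    W = rng.standard_normal((x.shape[1], 6)) * 0.1
    out = np.tanh(x @ W)
    return out[:, :3], (out[:, 3:] + 1) / 2

# Toy rollout over three frames of a 10-dim pose vector.
poses = [rng.standard_normal(10) for _ in range(3)]
frames = [explicit_model(p) for p in poses]
motion = motion_features([res for res, _ in frames])
offsets, colors = implicit_decoder(frames[-1][1], motion)
```

The key design point mirrored here is that the implicit decoder consumes *both* appearance and temporal motion features, so two identical poses with different motion histories decode to different clothing geometry, which is how the method resolves the motion ambiguity described above.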