Generating Images with 3D Annotations Using Diffusion Models
International Conference on Learning Representations (2023)
Abstract
Diffusion models have emerged as a powerful generative method, capable of
producing stunning photo-realistic images from natural language descriptions.
However, these models lack explicit control over the 3D structure in the
generated images. This hinders our ability to obtain detailed 3D
annotations for the generated images or to craft instances with specific poses
and distances. In this paper, we propose 3D Diffusion Style Transfer (3D-DST),
which incorporates 3D geometry control into diffusion models. Our method
exploits ControlNet, which extends diffusion models by using visual prompts in
addition to text prompts. We take 3D objects from 3D
shape repositories (e.g., ShapeNet and Objaverse), render them from a variety
of poses and viewing directions, compute the edge maps of the rendered images,
and use these edge maps as visual prompts to generate realistic images. With
explicit 3D geometry control, we can easily change the 3D structures of the
objects in the generated images and obtain ground-truth 3D annotations
automatically. This allows us to improve a wide range of vision tasks, e.g.,
classification and 3D pose estimation, in both in-distribution (ID) and
out-of-distribution (OOD) settings. We demonstrate the effectiveness of our
method through extensive experiments on ImageNet-100/200, ImageNet-R,
PASCAL3D+, ObjectNet3D, and OOD-CV. The results show that our method
significantly outperforms existing methods, e.g., by 3.8 percentage points on
ImageNet-100 using DeiT-B.
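
The abstract describes conditioning a diffusion model on edge maps of rendered 3D objects via ControlNet. Below is a minimal sketch of that edge-map-conditioned generation step, assuming the Hugging Face diffusers library, a public Canny-edge ControlNet checkpoint, and a pre-rendered view of a 3D object; the rendering of ShapeNet/Objaverse meshes, the exact prompts, and the file paths shown are illustrative assumptions, not the authors' released implementation.

```python
# Sketch: generate an image whose 3D structure follows a rendered view,
# using the rendering's edge map as a ControlNet visual prompt.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a rendered view of a 3D object (produced offline from a known pose).
# "rendered_view.png" is a hypothetical path.
rendered = np.array(Image.open("rendered_view.png").convert("RGB"))

# Compute an edge map of the rendering; the edges serve as the visual prompt.
edges = cv2.Canny(rendered, 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# ControlNet consumes the edge map alongside the text prompt.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Because the generated image follows the rendering's geometry, the pose used
# for rendering can be reused as a ground-truth 3D annotation.
image = pipe(
    prompt="a photo of a car on a city street",
    image=edge_image,
    num_inference_steps=30,
).images[0]
image.save("generated_with_3d_annotation.png")
```

In this sketch, varying the camera pose of the offline rendering changes the edge map and therefore the 3D structure of the generated image, which is how explicit geometry control and automatic 3D annotations would be obtained.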