Learned representation-guided diffusion models for large-image generation
arXiv (2023)
Abstract
To synthesize high-fidelity samples, diffusion models typically require
auxiliary data to guide the generation process. However, obtaining the
painstaking patch-level annotations required in specialized domains such as
histopathology and satellite imagery is impractical: annotation is often
performed by domain experts and involves hundreds of millions of patches.
Modern-day
self-supervised learning (SSL) representations encode rich semantic and visual
information. In this paper, we posit that such representations are expressive
enough to act as proxies to fine-grained human labels. We introduce a novel
approach that trains diffusion models conditioned on embeddings from SSL. Our
diffusion models successfully project these features back to high-quality
histopathology and remote sensing images. In addition, we construct larger
images by assembling spatially consistent patches inferred from SSL embeddings,
preserving long-range dependencies. Augmenting real data by generating
variations of real images improves downstream classifier accuracy for
patch-level and larger, image-scale classification tasks. Our models are
effective even on datasets not encountered during training, demonstrating their
robustness and generalizability. Generating images from learned embeddings is
agnostic to the source of the embeddings. The SSL embeddings used to generate a
large image can either be extracted from a reference image, or sampled from an
auxiliary model conditioned on any related modality (e.g. class labels, text,
genomic data). As proof of concept, we introduce the text-to-large image
synthesis paradigm where we successfully synthesize large pathology and
satellite images out of text descriptions.
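The pipeline the abstract describes (SSL embedding → embedding-conditioned diffusion → patch → mosaic of spatially consistent patches) can be illustrated with a toy sketch. This is not the authors' model: the "denoiser" below is a fixed linear projection standing in for a trained conditional network, and the names `generate_patch` and `assemble_large_image` are hypothetical.

```python
import numpy as np

PATCH = 8      # toy patch side length
EMB_DIM = 16   # toy SSL embedding dimension

rng = np.random.default_rng(0)
# Stand-in for a trained conditional denoiser: a fixed linear
# projection from embedding space to pixel space (hypothetical).
W = rng.standard_normal((EMB_DIM, PATCH * PATCH))

def generate_patch(embedding, steps=10):
    """Toy reverse-diffusion loop conditioned on an SSL embedding.

    Starts from Gaussian noise and repeatedly nudges the sample
    toward the embedding's projection -- a crude stand-in for
    denoising steps guided by the conditioning vector.
    """
    target = (embedding @ W).reshape(PATCH, PATCH)
    x = rng.standard_normal((PATCH, PATCH))
    for _ in range(steps):
        x = x + 0.5 * (target - x)   # move toward the conditioned mode
    return x

def assemble_large_image(embedding_grid):
    """Tile patches generated from a 2D grid of SSL embeddings."""
    rows = [np.hstack([generate_patch(e) for e in row])
            for row in embedding_grid]
    return np.vstack(rows)

# A 3x4 grid of embeddings; per the abstract, these could be extracted
# from a reference image or sampled from an auxiliary model conditioned
# on text, class labels, or genomic data.
grid = [[rng.standard_normal(EMB_DIM) for _ in range(4)] for _ in range(3)]
large = assemble_large_image(grid)
print(large.shape)   # (24, 32)
```

Because neighboring embeddings from a real image vary smoothly, generating each patch from its own embedding and tiling the results is what lets the method preserve long-range structure across the assembled large image.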