Scaling Laws of Synthetic Images for Model Training ... for Now
CoRR (2023)
Abstract
Recent significant advances in text-to-image models unlock the possibility of
training vision systems using synthetic images, potentially overcoming the
difficulty of collecting curated data at scale. It is unclear, however, how
these models behave at scale, as more synthetic data is added to the training
set. In this paper, we study the scaling laws of synthetic images generated by
state-of-the-art text-to-image models, for the training of supervised models:
image classifiers with label supervision, and CLIP with language supervision.
We identify several factors, including text prompts, classifier-free guidance
scale, and types of text-to-image models, that significantly affect scaling
behavior. After tuning these factors, we observe that synthetic images
demonstrate a scaling trend similar to, but slightly less effective than, real
images in CLIP training, while they significantly underperform in scaling when
training supervised image classifiers. Our analysis indicates that the main
reason for this underperformance is the inability of off-the-shelf
text-to-image models to generate certain concepts, a limitation that
significantly impairs the training of image classifiers. Our findings also
suggest that scaling synthetic data can be particularly effective in scenarios
such as: (1) when there is a limited supply of real images for a supervised
problem (e.g., fewer than 0.5 million images in ImageNet), (2) when the
evaluation dataset diverges significantly from the training data, indicating
the out-of-distribution scenario, or (3) when synthetic data is used in
conjunction with real images, as demonstrated in the training of CLIP models.
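
As a rough illustration of the kind of pipeline the paper studies, the sketch below generates class-conditioned synthetic training images with an off-the-shelf text-to-image model while sweeping the classifier-free guidance scale, one of the factors the abstract identifies as affecting scaling behavior. It assumes the Hugging Face diffusers library; the checkpoint name, class names, prompt template, and guidance values are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch (not the paper's exact setup): synthesize labeled training
    # images with an off-the-shelf text-to-image model, varying the
    # classifier-free guidance scale for each class-name prompt.
    import os
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base",  # assumed checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    os.makedirs("synthetic", exist_ok=True)
    class_names = ["golden retriever", "espresso", "fire truck"]  # hypothetical classes
    guidance_scales = [2.0, 5.0, 7.5]  # sweep over classifier-free guidance

    for name in class_names:
        prompt = f"a photo of a {name}"  # simple class-name prompt template
        for g in guidance_scales:
            image = pipe(prompt, guidance_scale=g, num_inference_steps=30).images[0]
            image.save(f"synthetic/{name.replace(' ', '_')}_cfg{g}.png")

In practice, the choice of prompt template and guidance scale would be tuned per downstream task (label-supervised classification vs. CLIP-style language supervision), which is the kind of tuning the abstract refers to.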