VersaT2I: Improving Text-to-Image Models with Versatile Reward
arXiv (Cornell University), 2024
Abstract
Recent text-to-image (T2I) models have benefited from large-scale and high-quality data, demonstrating impressive performance. However, these T2I models still struggle to produce images that are aesthetically pleasing, geometrically accurate, faithful to text, and of good low-level quality. We present VersaT2I, a versatile training framework that can boost the performance of any T2I model using multiple rewards. We decompose the quality of an image into several aspects such as aesthetics, text-image alignment, geometry, and low-level quality. Then, for each quality aspect, we select images generated by the model that score highly on that aspect as a training set and finetune the T2I model using Low-Rank Adaptation (LoRA). Furthermore, we introduce a gating function to combine multiple quality aspects and avoid conflicts between them. Our method is easy to extend and requires no manual annotation, reinforcement learning, or model architecture changes. Extensive experiments demonstrate that VersaT2I outperforms baseline methods across various quality criteria.
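The abstract outlines the core mechanism, one LoRA adapter per quality aspect combined through a gating function, but does not give the exact formulation. Below is a minimal PyTorch sketch of one plausible reading, in which an input-conditioned softmax gate mixes per-aspect low-rank updates on top of a frozen base projection; the class name, the `n_aspects` and `rank` parameters, and the softmax gate itself are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedMultiLoRALinear(nn.Module):
    """Sketch: a frozen base linear layer plus one LoRA adapter per quality aspect.

    An input-conditioned gate produces one weight per aspect, so the adapters'
    low-rank updates are mixed softly rather than applied in conflict.
    (Hypothetical reading of the paper's gating function, not its actual code.)
    """
    def __init__(self, in_features, out_features, n_aspects=4, rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad = False  # base T2I weights stay frozen under LoRA
        # One (A, B) low-rank pair per aspect: delta_W_i = B_i @ A_i, rank << dims.
        # Standard LoRA init: A small random, B zero, so training starts at the base model.
        self.A = nn.Parameter(torch.randn(n_aspects, rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_aspects, out_features, rank))
        # Gating network: maps the input to softmax weights over aspects.
        self.gate = nn.Linear(in_features, n_aspects)

    def forward(self, x):
        y = self.base(x)
        g = torch.softmax(self.gate(x), dim=-1)             # (..., n_aspects)
        low = torch.einsum("...d,ard->...ar", x, self.A)     # (..., n_aspects, rank)
        up = torch.einsum("...ar,aor->...ao", low, self.B)   # (..., n_aspects, out)
        # Weight each aspect's update by its gate value, then sum over aspects.
        return y + (g.unsqueeze(-1) * up).sum(dim=-2)

# Usage: a drop-in replacement for a projection layer inside a diffusion UNet.
layer = GatedMultiLoRALinear(320, 320, n_aspects=4, rank=8)
out = layer(torch.randn(2, 77, 320))
print(out.shape)  # torch.Size([2, 77, 320])
```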