AdaDiff: Accelerating Diffusion Models through Step-Wise Adaptive Computation
arXiv (2023)
Abstract
Diffusion models achieve great success in generating diverse and
high-fidelity images, yet their widespread application, especially in real-time
scenarios, is hampered by their inherently slow generation speed. The slow
generation stems from the need for multi-step network inference. While some
predictions benefit from the model's full computation in a given sampling
iteration, not every iteration requires the same amount of computation, so a
fixed per-step budget can waste computation. Unlike typical adaptive
computation settings, which concern single-step prediction, the multi-step
generation process of diffusion models must dynamically adjust its
computational resource allocation based on an ongoing assessment of each
step's importance to the final image, presenting a unique set of challenges.
In this work, we propose AdaDiff, an adaptive framework that
dynamically allocates computation resources in each sampling step to improve
the generation efficiency of diffusion models. To assess the effects of changes
in computational effort on image quality, we present a timestep-aware
uncertainty estimation module (UEM). Integrated at each intermediate layer,
the UEM estimates the predictive uncertainty of that layer's output. This
uncertainty estimate then serves as the signal for deciding whether to
terminate inference early at that step.
Additionally, we introduce an uncertainty-aware layer-wise loss aimed at
bridging the performance gap between full models and their adaptive
counterparts.
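The early-exit mechanism the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `layers`, `uncertainty_heads`, and the scalar threshold are hypothetical stand-ins for the backbone blocks, the per-layer UEMs, and the exit criterion, whose actual forms the abstract does not specify.

```python
def adaptive_denoise_step(x, layers, uncertainty_heads, threshold=0.1):
    """One sampling step with layer-wise early exit.

    Runs the backbone layer by layer; after each layer, a per-layer
    uncertainty head (a stand-in for the paper's UEM) scores the current
    prediction. If the uncertainty falls below `threshold`, inference for
    this sampling step stops early. Returns the prediction and the number
    of layers actually executed.
    """
    h = x
    for depth, (layer, uem) in enumerate(zip(layers, uncertainty_heads)):
        h = layer(h)
        u = uem(h)  # scalar predictive uncertainty for this layer's output
        if u < threshold:        # confident enough: terminate early
            return h, depth + 1
    return h, len(layers)        # fell through: full computation used


# Toy usage: four identical layers, uncertainty shrinking with depth,
# so the step exits after three of the four layers.
layers = [lambda h: h + 1.0] * 4
uncertainty_heads = [lambda h, u=u: u for u in (0.5, 0.3, 0.05, 0.01)]
out, layers_used = adaptive_denoise_step(0.0, layers, uncertainty_heads)
```

Across a full sampling trajectory, steps whose predictions are easy (low uncertainty early in the network) would consume fewer layers than hard steps, which is the source of the claimed speedup.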