LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models
CoRR (2024)
Abstract
In the era of AIGC, demand has emerged for low-budget and even on-device applications
of diffusion models. Several approaches have been proposed for compressing the Stable
Diffusion models (SDMs), and most of them rely on handcrafted layer removal to obtain
a smaller U-Net, together with knowledge distillation to recover network performance.
However, such handcrafted layer removal is inefficient and lacks scalability and
generalization, and the feature distillation employed in the retraining phase suffers
from an imbalance issue in which a few numerically large feature loss terms dominate
the others throughout retraining. To this end, we proposed layer pruning and
normalized distillation for compressing diffusion models (LAPTOP-Diff). We 1)
introduced a layer pruning method to compress the SDM's U-Net automatically and
proposed an effective one-shot pruning criterion whose one-shot performance is
guaranteed by its good additivity property, surpassing other layer pruning and
handcrafted layer removal methods, and 2) proposed normalized feature distillation
for retraining, which alleviates the imbalance issue. Using the proposed LAPTOP-Diff,
we compressed the U-Nets of SDXL and SDM-v1.5 for the most advanced performance,
achieving a minimal 4.0% decline in PickScore at a pruning ratio of 50%, whereas the
previous state-of-the-art methods' minimal PickScore decline is 8.2%.
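
The abstract names two techniques: an additivity-based one-shot layer pruning criterion and a normalized feature distillation loss. Below is a minimal PyTorch sketch of both ideas; the function names, the choice of normalizer (the detached squared norm of each teacher feature), and the greedy budget-based selection are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def normalized_feature_distill_loss(teacher_feats, student_feats, eps=1e-8):
    # One MSE term per feature map, each divided by the detached squared norm
    # of the corresponding teacher feature, so that no single numerically
    # large term dominates the total (the imbalance issue the paper targets).
    total = 0.0
    for f_t, f_s in zip(teacher_feats, student_feats):
        mse = F.mse_loss(f_s, f_t.detach(), reduction="sum")
        scale = f_t.detach().pow(2).sum() + eps
        total = total + mse / scale
    return total


def select_layers_to_prune(removal_losses, layer_params, params_to_remove):
    # Greedy one-shot selection relying on additivity: the output loss of
    # removing a set of layers is approximated by the sum of the losses
    # measured when each layer is removed alone, so layers with the smallest
    # individual impact are dropped first until the parameter budget implied
    # by the target pruning ratio is met. All names here are hypothetical.
    order = sorted(range(len(removal_losses)), key=lambda i: removal_losses[i])
    pruned, removed = [], 0
    for i in order:
        if removed >= params_to_remove:
            break
        pruned.append(i)
        removed += layer_params[i]
    return pruned
```

In this sketch, `teacher_feats` and `student_feats` would be lists of intermediate U-Net feature maps collected via forward hooks, and `removal_losses` would hold the output-level loss measured when each candidate layer is skipped individually.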