HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach
CoRR (2024)
Abstract
Our paper addresses the complex task of transferring a hairstyle from a
reference image to an input photo for virtual hair try-on. This task is
challenging due to the need to adapt to various photo poses, the sensitivity of
hairstyles, and the lack of objective metrics. Current state-of-the-art
hairstyle transfer methods rely on an optimization process for different parts
of the pipeline, making them prohibitively slow. At the same time, faster
encoder-based models are of very low quality because they either operate in
StyleGAN's W+ space or use other low-dimensional image generators.
Additionally, both approaches struggle with hairstyle transfer when the
source pose differs significantly from the target pose, because they either
ignore the pose entirely or handle it inefficiently. In our paper, we
present the HairFast model, which uniquely solves these problems and achieves
high resolution, near real-time performance, and reconstruction quality
superior to optimization-based methods. Our solution includes a new
architecture operating in the FS latent space of StyleGAN, an enhanced
inpainting approach, improved encoders for alignment and color
transfer, and a new encoder for post-processing. The effectiveness of our
approach is demonstrated on realism metrics after random hairstyle transfer and
reconstruction when the original hairstyle is transferred. In the most
difficult scenario of transferring both shape and color of a hairstyle from
different images, our method performs in less than a second on the Nvidia V100.
Our code is available at https://github.com/AIRI-Institute/HairFastGAN.
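
To make the staged pipeline described in the abstract concrete, below is a minimal conceptual sketch of how such an encoder-based transfer might be organized. All class, method, and variable names here are hypothetical placeholders for illustration, not the actual HairFastGAN API; see the linked repository for the real implementation.

```python
import torch

class HairTransferPipeline:
    """Conceptual sketch of a staged, encoder-based hair transfer pipeline
    operating in a StyleGAN FS latent space (F: spatial tensor, S: style vector).
    All components are assumed to be pretrained modules passed in by the caller."""

    def __init__(self, embedder, align_encoder, blend_encoder, post_encoder, generator):
        self.embedder = embedder    # inverts an image into (F, S) latents
        self.align = align_encoder  # adapts the reference hair shape to the source pose
        self.blend = blend_encoder  # transfers hair color via style vectors
        self.post = post_encoder    # post-processing pass to restore fine details
        self.generator = generator  # pretrained StyleGAN synthesis network

    @torch.no_grad()
    def transfer(self, face_img, shape_img, color_img):
        # 1. Embed all inputs into the FS latent space.
        f_face, s_face = self.embedder(face_img)
        f_shape, _ = self.embedder(shape_img)
        _, s_color = self.embedder(color_img)

        # 2. Alignment: predict an F tensor whose hair region follows the
        #    reference shape while matching the pose of the source face.
        f_aligned = self.align(f_face, f_shape)

        # 3. Blending: mix style vectors so the hair takes the reference color.
        s_blended = self.blend(s_face, s_color)

        # 4. Post-processing: recover details lost in earlier stages,
        #    then synthesize the final image.
        f_final, s_final = self.post(f_aligned, s_blended, face_img)
        return self.generator(f_final, s_final)
```

Because every stage is a feed-forward encoder pass rather than a per-image optimization loop, a design of this shape can run in well under a second on a modern GPU, which is the speed regime the abstract reports.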