NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real Image Animation
Abstract
NeRF-based generative models have shown impressive capacity for generating high-quality images with consistent 3D geometry. Despite successfully synthesizing fake-identity images randomly sampled from latent space, adopting these models to generate face images of real subjects remains a challenging task due to the so-called inversion issue. In this paper, we propose a universal method to surgically fine-tune these NeRF-GAN models in order to achieve high-fidelity animation of real subjects from only a single image. Given the optimized latent code for an out-of-domain real image, we employ 2D loss functions on the rendered image to reduce the identity gap. Furthermore, our method leverages explicit and implicit 3D regularizations using in-domain neighborhood samples around the optimized latent code to remove geometrical and visual artifacts. Our experiments confirm the effectiveness of our method in realistic, high-fidelity, and 3D-consistent animation of real faces on multiple NeRF-GAN models across different datasets.
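The abstract describes combining a 2D reconstruction loss with regularization terms computed on in-domain latent samples drawn near the optimized latent code. The sketch below illustrates that high-level structure only; the paper's actual losses and sampling scheme are not specified here, so the helper names, the Gaussian neighborhood sampling, and the weighting scheme (`sigma`, `lam`) are all hypothetical assumptions.

```python
import numpy as np

def neighborhood_samples(z_opt, n=4, sigma=0.1, rng=None):
    """Draw n in-domain latent codes near the optimized code z_opt.

    Hypothetical helper: an isotropic Gaussian perturbation stands in for
    whatever neighborhood-sampling scheme the paper actually uses.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    return z_opt + sigma * rng.standard_normal((n, z_opt.shape[-1]))

def total_loss(img_loss, reg_losses, lam=0.5):
    """Combine the 2D image loss with averaged 3D regularization terms.

    lam is an assumed trade-off weight between identity fidelity and
    geometric/visual-artifact regularization.
    """
    return img_loss + lam * float(np.mean(reg_losses))
```

In a real fine-tuning loop, `img_loss` would come from comparing the rendered image against the input photo, and each entry of `reg_losses` from rendering one of the neighborhood latents and penalizing deviation from the pretrained generator's output.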