Image restoration has been well studied for decades, yet most existing research efforts are dedicated to specific degradation categories (as shown in the table above), leading to limited feasibility for real-world cases with complex, heterogeneous degradations.
Some existing works investigate the blind face restoration problem. However, they often rely on additional semantic guidance, such as landmark keypoints, parsing maps, or even other high-quality faces (GFRNet), which makes the network focus on the frontal face and neglect the background:
Our network, HiFaceGAN, is composed of several nested collaborative suppression and replenishment (CSR) units, each specializing in restoring a specific semantic aspect, leading to systematic, hierarchical face renovation, as displayed here:
Compared to the original SPADE, StyleGAN, or VAE, our framework allows the model to learn the optimal feature representation and decomposition through end-to-end learning, hence it achieves superior renovation performance over most baselines.
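The nested CSR structure described above can be illustrated with a minimal sketch. This is a conceptual toy, not the actual implementation: the gating and detail-generation here are stand-ins for the learned suppression and replenishment modules, and all function names and shapes are hypothetical.

```python
import numpy as np

def suppress(features, gate):
    """Suppression: attenuate degraded feature responses with a
    content-adaptive gate in [0, 1] (here a fixed array for illustration)."""
    return features * gate

def replenish(features, generated_detail):
    """Replenishment: inject generated detail for the semantic aspect
    this unit is responsible for."""
    return features + generated_detail

def csr_unit(features, gate, generated_detail):
    """One collaborative suppression-and-replenishment (CSR) unit."""
    return replenish(suppress(features, gate), generated_detail)

def hierarchical_renovation(features, units):
    """Nesting several CSR units yields the hierarchical renovation:
    each stage refines the output of the previous one."""
    for gate, detail in units:
        features = csr_unit(features, gate, detail)
    return features

# Toy example: three nested units with fixed gates and details.
feats = np.ones((4, 4))
units = [(np.full((4, 4), 0.5), np.full((4, 4), 0.2)) for _ in range(3)]
out = hierarchical_renovation(feats, units)
```

In the real network these per-stage gates and details are produced by learned sub-modules, which is what allows the end-to-end optimization of the feature decomposition.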
Of course. Although we name our model "HiFaceGAN", neither the problem formulation nor the implementation relies on facial priors, so it can adapt to other natural images without modification and achieve satisfactory performance. Still, increasing texture complexity is a major challenge for our framework (and presumably for any other generative model). Here we show the result of HiFaceGAN tackling joint rain and haze removal on outdoor scene images (the model has been fine-tuned):