Generative Facial Prior-Generative Adversarial Network (GFP-GAN)
- Generative Adversarial Network (GAN)
- Semi-Supervised Learning with Generative Adversarial Network (SSL-GAN)
- Context-Conditional Generative Adversarial Network (CC-GAN) ... Context
- Image-to-Image Translation
- Autoencoder (AE) / Encoder-Decoder
- Variational Autoencoder (VAE)
- Video/Image
- Generative AI ... Conversational AI ... OpenAI's ChatGPT ... Perplexity ... Microsoft's Bing ... You ... Google's Bard ... Baidu's Ernie
- Towards Real-World Blind Face Restoration with Generative Facial Prior Xintao Wang, Yu Li, Honglun Zhang, Ying Shan
- Photo restoration with GFP-GAN Demo
- GFP-GAN Code
GFP-GAN is a framework that leverages a rich and diverse generative facial prior for the challenging task of blind face restoration. The prior is incorporated into the restoration process through channel-split spatial feature transform (CS-SFT) layers, which allow the method to strike a good balance between realness and fidelity. Extensive comparisons show that GFP-GAN outperforms prior art at joint face restoration and color enhancement on real-world images.

Conventional methods fine-tune an existing model to restore images by measuring differences between the restored and real photos, which, the researchers note, frequently leads to low-quality results. GFP-GAN instead uses a pre-trained generative model (NVIDIA's StyleGAN2) as a prior that informs the restoration network at multiple stages of the image generation process. The technique aims to preserve the identity of the people in a photo, with particular attention to facial features such as the eyes and mouth.
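The channel-split spatial feature transform described above can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the repository's implementation: the class name ChannelSplitSFT, the small convolutional heads that predict the scale and shift maps, and the toy channel counts are all chosen for clarity. The idea it demonstrates is the one from the paper: half of the decoder's feature channels pass through untouched to preserve fidelity to the input face, while the other half is spatially modulated by scale and shift maps predicted from the pretrained StyleGAN2 prior features to inject realistic detail.

```python
import torch
import torch.nn as nn


class ChannelSplitSFT(nn.Module):
    """Illustrative channel-split spatial feature transform (CS-SFT).

    One half of the decoder features is left unchanged (fidelity branch);
    the other half is modulated per pixel by scale/shift maps predicted
    from the generative-prior features (realness branch).
    """

    def __init__(self, dec_channels: int, prior_channels: int):
        super().__init__()
        self.split = dec_channels // 2
        # Conv heads predicting spatial scale (alpha) and shift (beta) maps
        # from the prior features. Layer sizes here are illustrative only.
        self.to_scale = nn.Sequential(
            nn.Conv2d(prior_channels, self.split, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(self.split, self.split, 3, padding=1),
        )
        self.to_shift = nn.Sequential(
            nn.Conv2d(prior_channels, self.split, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(self.split, self.split, 3, padding=1),
        )

    def forward(self, dec_feat: torch.Tensor, prior_feat: torch.Tensor) -> torch.Tensor:
        # Split decoder features along the channel dimension.
        identity, modulated = dec_feat[:, : self.split], dec_feat[:, self.split :]
        alpha = self.to_scale(prior_feat)  # spatial scale map
        beta = self.to_shift(prior_feat)   # spatial shift map
        modulated = modulated * alpha + beta
        # Concatenate the untouched (fidelity) half with the modulated (realness) half.
        return torch.cat([identity, modulated], dim=1)


if __name__ == "__main__":
    # Toy shapes only; in GFP-GAN the decoder features come from its
    # degradation-removal U-Net and the prior features from StyleGAN2
    # at the matching resolution.
    layer = ChannelSplitSFT(dec_channels=64, prior_channels=128)
    dec = torch.randn(1, 64, 32, 32)
    prior = torch.randn(1, 128, 32, 32)
    print(layer(dec, prior).shape)  # torch.Size([1, 64, 32, 32])
```

Splitting the channels, rather than modulating all of them, is what lets the model trade identity preservation (the untouched half) against realism injected from the generative prior (the modulated half).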