Context-Conditional Generative Adversarial Network (CC-GAN)


A method for harnessing unlabeled image data based on image in-painting. A generative model is trained to generate the pixels within a missing hole, conditioned on the context provided by the surrounding parts of the image. These in-painted images are then used in an adversarial setting (Goodfellow et al., 2014) to train a large discriminator model whose task is to determine whether an image is real (drawn from the unlabeled training set) or fake (an in-painted image). The realistic-looking fake examples produced by the generative model force the discriminator to learn features that generalize to the related task of classifying objects. Adversarial training on the in-painting task can thus be used to regularize large discriminative models during supervised training on a handful of labeled images. A closely related, independently developed approach is the context encoder of Pathak et al. (2016), which introduces an encoder-decoder framework used to in-paint images from which a patch has been randomly removed. Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks | Emily Denton, Sam Gross, Rob Fergus
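The core data-preparation step above (cut a hole from an unlabeled image, then paste generated pixels back into that hole before showing the result to the discriminator) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the helper names (`apply_context_mask`, `compose_inpainted`) and the square-hole shape are assumptions for clarity.

```python
import numpy as np

def apply_context_mask(image, hole_size, top, left):
    """Zero out a square hole; returns the masked image and a binary
    mask where 1 = kept context, 0 = missing region (an assumed layout)."""
    mask = np.ones_like(image)
    mask[top:top + hole_size, left:left + hole_size] = 0.0
    return image * mask, mask

def compose_inpainted(masked_image, mask, generated):
    """Build the 'fake' discriminator input: real context outside the
    hole, generator output inside the hole."""
    return masked_image + (1.0 - mask) * generated

# Toy example: an 8x8 "image" with a 4x4 hole at position (2, 2).
image = np.arange(64, dtype=float).reshape(8, 8)
masked, mask = apply_context_mask(image, hole_size=4, top=2, left=2)

# Stand-in for generator output (a real CC-GAN generator would be a
# conditional network producing the missing pixels from the context).
generated = np.full_like(image, 0.5)
fake = compose_inpainted(masked, mask, generated)
```

In the full model, `fake` is what the discriminator must distinguish from untouched images, which is why realistic in-painting drives it to learn transferable features.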

Figure below: (a) Context-encoder of Pathak et al. (2016), configured for object classification task. (b) Semi-supervised learning with GANs (SSL-GAN). (c) Semi-supervised learning with CC-GANs. In (a-c) the blue network indicates the feature representation being learned (encoder network in the context-encoder model and discriminator network in the GAN and CC-GAN models).
