- AI Solver
- Generative Query Network (GQN)
- Discriminative vs. Generative
- Natural Language Generation (NLG)
- Emergence from Analogies
- Generative Model | Wikipedia
- Synthetic Labeling
- Demos: generating music, faces, and 3D objects
- Music: Generating Piano Music with Transformer | I. Simon, A. Huang, J. Engel, C. "Fjord" Hawthorne - Google. A Colab notebook for playing with pretrained Transformer models for piano music generation, based on the Music Transformer model.
- Faces: TF-Hub generative image model | The TensorFlow Hub Authors - Google. Demonstrates the use of a TF-Hub module based on a generative adversarial network (GAN); the module maps N-dimensional vectors in a latent space to RGB images.
- 3D Objects: 3D Style Transfer | Google. Uses Lucid and a Differentiable Image Parameterization to transfer style from a textured 3D model and a style image onto a new texture for the 3D model.
- Three main types of generative architecture:
- Autoencoder (AE) / Encoder-Decoder
- Sequence Models
- Adversarial Networks
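Of the three types above, the autoencoder is the simplest to sketch. Below is a minimal toy example, not from the source: a linear autoencoder trained with plain stochastic gradient descent to compress 2-D points (which actually lie near a 1-D line) down to a single code value and reconstruct them. All data, weights, and hyperparameters are made up for illustration.

```python
import random

rng = random.Random(0)

# Toy 2-D data that really lives on a 1-D line: points (t, 2t) plus tiny noise.
data = [(t, 2 * t + rng.gauss(0, 0.01)) for t in [rng.uniform(-1, 1) for _ in range(100)]]

# Linear autoencoder: encoder maps 2-D -> 1-D code z, decoder maps z -> 2-D reconstruction.
we = [rng.gauss(0, 0.1), rng.gauss(0, 0.1)]   # encoder weights (2 -> 1)
wd = [rng.gauss(0, 0.1), rng.gauss(0, 0.1)]   # decoder weights (1 -> 2)

def encode(x):
    return we[0] * x[0] + we[1] * x[1]

def decode(z):
    return (wd[0] * z, wd[1] * z)

lr = 0.05
for _ in range(500):                            # epochs of per-sample SGD
    for x in data:
        z = encode(x)
        r = decode(z)
        e = (r[0] - x[0], r[1] - x[1])          # reconstruction error r - x
        # Gradients of ||r - x||^2 w.r.t. decoder weights, then encoder weights.
        gd = (2 * e[0] * z, 2 * e[1] * z)
        gz = 2 * (e[0] * wd[0] + e[1] * wd[1])  # backprop through the code z
        ge = (gz * x[0], gz * x[1])
        wd[0] -= lr * gd[0]; wd[1] -= lr * gd[1]
        we[0] -= lr * ge[0]; we[1] -= lr * ge[1]
```

The bottleneck (a single code value) is what forces the network to learn the data's underlying structure; real autoencoders do the same with nonlinear layers and higher-dimensional codes.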
More formally, given a set of data instances X and a set of labels Y:
- Generative models capture the joint probability p(X, Y), or just p(X) if there are no labels.
- Discriminative models capture the conditional probability p(Y | X).
A generative model includes the distribution of the data itself, and tells you how likely a given example is. For example, models that predict the next word in a sequence are typically generative models (usually much simpler than GANs) because they can assign a probability to a sequence of words.
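A next-word model of this kind can be sketched in a few lines. The corpus and the bigram chain below are illustrative toys, not from the source; the point is that the same counts let the model both score a sequence (assign it a probability) and generate new text.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count bigram transitions: counts[w1][w2] = number of times w2 follows w1.
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word_dist(w):
    """Conditional distribution p(next word | w) estimated from counts."""
    c = counts[w]
    total = sum(c.values())
    return {w2: n / total for w2, n in c.items()}

def sequence_prob(words):
    """Probability of the sequence under the bigram chain (start term ignored)."""
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= next_word_dist(w1).get(w2, 0.0)
    return p

def generate(start, length, rng=random.Random(0)):
    """Sample new text from the model: generative models can generate."""
    out = [start]
    for _ in range(length):
        if not counts[out[-1]]:
            break                      # dead end: word only seen at corpus end
        dist = next_word_dist(out[-1])
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, probs)[0])
    return out
```

For example, `sequence_prob(["the", "cat", "sat"])` multiplies p(cat | the) = 2/3 by p(sat | cat) = 1/2, giving 1/3, while `generate("the", 5)` samples a fresh sequence from the same distribution.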
- Generative models can generate new data instances.
- Discriminative models discriminate between different kinds of data instances.
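The contrast above can be made concrete with a toy 1-D classification problem (the data and hyperparameters here are invented for illustration). The generative side fits p(X, Y) = p(X | Y) p(Y) with one Gaussian per class, so it can sample brand-new instances; the discriminative side fits only p(Y | X) via logistic regression and never models p(X) at all.

```python
import math
import random

rng = random.Random(0)

# Synthetic labeled data: class 0 centered at -2, class 1 centered at +2.
data = [(rng.gauss(-2, 1), 0) for _ in range(200)] + \
       [(rng.gauss(+2, 1), 1) for _ in range(200)]

# --- Generative: model the joint p(x, y) = p(x | y) p(y), one Gaussian per class.
def fit_generative(data):
    params = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        params[label] = (mu, var, len(xs) / len(data))  # mean, variance, prior p(y)
    return params

def sample(params, rng):
    """Draw y ~ p(y), then x ~ p(x | y): a new data instance, not a label."""
    y = 0 if rng.random() < params[0][2] else 1
    mu, var, _ = params[y]
    return rng.gauss(mu, math.sqrt(var)), y

# --- Discriminative: fit p(y = 1 | x) = sigmoid(w * x + b) directly by gradient descent.
def fit_discriminative(data, steps=2000, lr=0.1):
    w = b = 0.0
    n = len(data)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n
            gb += (p - y) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

params = fit_generative(data)
w, b = fit_discriminative(data)
```

After fitting, `sample(params, rng)` produces new (x, y) pairs, which the logistic model cannot do; the logistic model only answers "which label?" for an x it is given.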
- Deep Generative Modeling
- Generative Modeling Language