Variational Autoencoder (VAE)
{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=artificial, intelligence, machine, learning, models, algorithms, data, singularity, moonshot, Tensorflow, Google, Nvidia, Microsoft, Azure, Amazon, AWS
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
 
[http://www.youtube.com/results?search_query=Variational+Autoencoders YouTube search...]

[http://www.google.com/search?q=Variational+Autoencoders+machine+learning+ML+artificial+intelligence ...Google search]
  
 
* [http://www.asimovinstitute.org/author/fjodorvanveen/ Neural Network Zoo | Fjodor Van Veen]
* [[Autoencoder (AE) / Encoder-Decoder]]
* [[Clustering]]
* [[Generative Tensorial Reinforcement Learning (GENTRL)]]
* [[Deep Belief Network (DBN)]]
* [[Restricted Boltzmann Machine (RBM)]]
* [http://dpkingma.com/sgvb_mnist_demo/demo.html Digit Fantasies by a Deep Generative Model | Durk Kingma]
* [http://vdumoulin.github.io/morphing_faces/ Morphing Faces | Vincent Dumoulin]
  
 
Variational Autoencoders (VAE) have the same architecture as AEs but are “taught” something else: an approximated probability distribution of the input samples. It’s a bit of a return to the roots, as they are somewhat more closely related to BMs and RBMs. They do, however, rely on Bayesian mathematics regarding probabilistic inference and independence, as well as a re-parametrisation trick to achieve this different representation. The inference and independence parts make sense intuitively, but they rely on somewhat complex mathematics. The basics come down to this: take influence into account. If one thing happens in one place and something else happens somewhere else, they are not necessarily related; if they are not related, then the error propagation should take that into account. This is a useful approach because neural networks are, in a way, large graphs, so it helps if you can rule out the influence of some nodes on other nodes as you dive into deeper layers. Kingma, Diederik P., and Max Welling. “Auto-Encoding Variational Bayes.” arXiv preprint arXiv:1312.6114 (2013).

http://www.asimovinstitute.org/wp-content/uploads/2016/09/vae.png
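To make the description above concrete, here is a minimal sketch of a VAE, assuming PyTorch and illustrative layer sizes (784 inputs, 400 hidden units, 20 latent dimensions, as for flattened MNIST digits); it is not taken from the references above. The encoder outputs the mean and log-variance of an approximate posterior q(z|x), the re-parametrisation trick (z = mu + sigma * eps) keeps sampling differentiable, and the training loss combines reconstruction error with a KL term that pulls q(z|x) toward a unit-Gaussian prior.

<syntaxhighlight lang="python">
# Minimal VAE sketch (assumes PyTorch; layer sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)      # mean of q(z|x)
        self.enc_logvar = nn.Linear(h_dim, z_dim)  # log-variance of q(z|x)
        self.dec_h = nn.Linear(z_dim, h_dim)
        self.dec_x = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Re-parametrisation trick: z = mu + sigma * eps, eps ~ N(0, I),
        # so gradients can flow through the sampling step.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        return torch.sigmoid(self.dec_x(F.relu(self.dec_h(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage sketch on a batch of flattened 28x28 images scaled to [0, 1]
model = VAE()
x = torch.rand(64, 784)            # stand-in for real data
x_hat, mu, logvar = model(x)
loss = vae_loss(x_hat, x, mu, logvar)
loss.backward()
</syntaxhighlight>

Once trained, sampling z from the unit-Gaussian prior and running only the decoder generates new data, which is the idea behind demos such as the Digit Fantasies link above.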
<youtube>LEetHbPk6b0</youtube>

<youtube>iz-TZOEKXzA</youtube>

<youtube>9zKuYvjFFS8</youtube>

<youtube>ar4Fm1V65Fw</youtube>

<youtube>uaaqyVS9-rM</youtube>
