(Stacked) Denoising Autoencoder (DAE)

[http://www.youtube.com/results?search_query=Stacked+Denoising+Autoencoder YouTube search...]

* [http://www.asimovinstitute.org/author/fjodorvanveen/ Neural Network Zoo | Fjodor Van Veen]
Denoising autoencoders (DAE) are AEs that are fed a corrupted version of the input (for example, an image made artificially grainy) rather than the clean input itself. The reconstruction error is computed the same way, however: the network's output is compared against the original, noise-free input. This encourages the network to learn broad, robust features rather than fine details, since fine details change constantly with the noise and learning them often turns out to be "wrong". Vincent, Pascal, et al. "Extracting and Composing Robust Features with Denoising Autoencoders." Proceedings of the 25th International Conference on Machine Learning. ACM, 2008.
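A minimal PyTorch sketch of this training loop (the layer sizes, learning rate, Gaussian corruption, and 0.3 noise level are illustrative assumptions; Vincent et al. corrupt inputs with masking noise instead):

<pre>
import torch
import torch.nn as nn

# Simple fully connected autoencoder (sizes are hypothetical).
class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)                # stand-in batch of clean inputs
noisy = x + 0.3 * torch.randn_like(x)  # corrupt the input with Gaussian noise

optimizer.zero_grad()
reconstruction = model(noisy)          # the network only ever sees the noisy version
loss = loss_fn(reconstruction, x)      # but the error is measured against the clean input
loss.backward()
optimizer.step()
</pre>

The defining detail is the loss line: reconstruction happens from the corrupted input, yet the error is computed against the original, noise-free input, which is what pushes the network toward features that survive the noise.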
  
 
<youtube>G1qA8z0PmR0</youtube>
 