 
* [http://karpathy.github.io/2015/05/21/rnn-effectiveness/ The Unreasonable Effectiveness of Recurrent Neural Networks | Andrej Karpathy]

* [http://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/ Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) | Jay Alammar]
 
 
== Recurrent Neural Network (RNN) ==
 
[http://www.youtube.com/results?search_query=recurrent+Neural+Network YouTube Search]
 
[http://www.google.com/search?q=recurrent+Neural+Network+rnn ...Google search]
 
  
 
Recurrent nets are a type of artificial neural network designed to recognize patterns in sequences of data, such as text, genomes, handwriting, the spoken word, or numerical time series data emanating from sensors, stock markets, and government agencies. They are arguably the most powerful and useful type of neural network, applicable even to images, which can be decomposed into a series of patches and treated as a sequence. Since recurrent networks possess a certain type of memory, and memory is also part of the human condition, we’ll make repeated analogies to memory in the brain.

Recurrent neural networks (RNN) are FFNNs with a time twist: they are not stateless; they have connections between passes, connections through time. Neurons are fed information not just from the previous layer but also from themselves on the previous pass. This means that the order in which you feed the input and train the network matters: feeding it “milk” and then “cookies” may yield different results compared to feeding it “cookies” and then “milk”.

One big problem with RNNs is the vanishing (or exploding) gradient problem: depending on the activation functions used, information rapidly gets lost over time, just as very deep FFNNs lose information with depth. Intuitively this wouldn’t seem like much of a problem because these are just weights and not neuron states, but the weights through time are actually where the information from the past is stored; if a weight reaches a value of 0 or 1,000,000, the previous state won’t be very informative.

RNNs can in principle be used in many fields, as most forms of data that don’t actually have a timeline (unlike sound or video) can still be represented as a sequence. A picture or a string of text can be fed one pixel or character at a time, so the time-dependent weights capture what came before in the sequence, not what happened x seconds before. In general, recurrent networks are a good choice for advancing or completing information, such as autocompletion. Elman, Jeffrey L. “Finding structure in time.” Cognitive Science 14.2 (1990): 179-211.
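The recurrence described above is only a few lines of arithmetic. Below is a minimal sketch of one Elman-style step, assuming NumPy; the sizes, the weights, and the name rnn_step are illustrative, not from any library.

<pre>
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 8, 16                          # illustrative sizes

W_xh = rng.normal(0, 0.1, (hidden_size, input_size))     # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))    # hidden -> hidden: the connection through time
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b): the neuron sees the current
    # input AND its own state from the previous pass.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Order matters: the same two inputs fed in a different order end in different states.
x_milk = rng.normal(size=input_size)
x_cookies = rng.normal(size=input_size)
h0 = np.zeros(hidden_size)
h_milk_first = rnn_step(x_cookies, rnn_step(x_milk, h0))     # "milk" then "cookies"
h_cookies_first = rnn_step(x_milk, rnn_step(x_cookies, h0))  # "cookies" then "milk"
print(np.allclose(h_milk_first, h_cookies_first))            # almost surely False
</pre>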
  
Bidirectional Recurrent Neural Networks (BiRNN) look exactly the same as their unidirectional counterparts. The difference is that the network is connected not just to the past, but also to the future. Schuster, Mike, and Kuldip K. Paliwal. “Bidirectional recurrent neural networks.” IEEE Transactions on Signal Processing 45.11 (1997): 2673-2681.
  
 
<img src="http://i.stack.imgur.com/mHIsF.png" width="800" height="600">
 
<youtube>nFTQ7kHQWtc</youtube>
 
<youtube>_NMI8peAmNA</youtube>

<youtube>DUxYvf1lW4Q</youtube>
 
<youtube>WCUNPb-5EYI</youtube>
 
<youtube>4rG8IsKdC3U</youtube>
 
<youtube>4tlrXYBt50s</youtube>
 
== Long Short-Term Memory (LSTM) ==
 
[http://www.youtube.com/results?search_query=LSTM+Long+Short+term+Memory YouTube Search]
 
[http://www.google.com/search?q=LSTM+Long+Short+term+Memory ...Google search]
 
 
* [http://www.analyticsvidhya.com/blog/2017/12/fundamentals-of-deep-learning-introduction-to-lstm/ Essentials of Deep Learning: Introduction to Long Short Term Memory] | [http://www.analyticsvidhya.com/blog/author/pranj52/ Pranjal Srivastava] 10 Dec 2017
 
* [http://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21 Illustrated Guide to LSTM’s and GRU’s: A step by step explanation | Michael Nguyen - Towards Data Science]
 
* [http://towardsdatascience.com/step-by-step-understanding-lstm-autoencoder-layers-ffab055b6352 Step-by-step understanding LSTM Autoencoder layers | Chitta Ranjan - Towards Data Science]
 
 
 
<img src="http://cdn-images-1.medium.com/max/1600/1*TaLAT-_y8KRT5jT011g-Pw.png" width="700" height="475">
 
 
 
<img src="http://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2017/12/10131302/13-768x295.png" width="700" height="375">
 
 
 
<img src="http://cdn-images-1.medium.com/max/800/1*e4_3OBFWnPU7oi0hXBiVWQ.png" width="700" height="475">
 
 
LSTMs combat the vanishing/exploding gradient problem by introducing gates and an explicitly defined memory cell. They are inspired mostly by circuitry, not so much by biology. Each neuron has a memory cell and three gates: input, output, and forget. The function of these gates is to safeguard the information by stopping or allowing its flow. The input gate determines how much of the information from the previous layer gets stored in the cell. The output gate takes the job on the other end and determines how much of the next layer gets to know about the state of this cell. The forget gate seems like an odd inclusion at first, but sometimes it’s good to forget: if the network is learning a book and a new chapter begins, it may be necessary to forget some characters from the previous chapter. LSTMs have been shown to be able to learn complex sequences, such as writing like Shakespeare or composing primitive music. Note that each of these gates has a weight to a cell in the previous neuron, so they typically require more resources to run. Hochreiter, Sepp, and Jürgen Schmidhuber. “Long short-term memory.” Neural Computation 9.8 (1997): 1735-1780.
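As a concrete illustration of the three gates, here is a minimal single-step sketch, assuming NumPy; the weight shapes and the name lstm_step are illustrative assumptions, not a reference implementation.

<pre>
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 8, 16

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate plus one for the candidate update; each sees the
# current input concatenated with the previous hidden state.
W_f, W_i, W_o, W_c = (rng.normal(0, 0.1, (hidden_size, input_size + hidden_size))
                      for _ in range(4))
b_f = np.ones(hidden_size)            # forget gate biased toward "keep" at first
b_i = np.zeros(hidden_size)
b_o = np.zeros(hidden_size)
b_c = np.zeros(hidden_size)

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])
    f = sigmoid(W_f @ z + b_f)        # forget gate: how much of the old cell state to keep
    i = sigmoid(W_i @ z + b_i)        # input gate: how much new information to store
    o = sigmoid(W_o @ z + b_o)        # output gate: how much of the cell the next layer sees
    c_tilde = np.tanh(W_c @ z + b_c)  # candidate memory content
    c_t = f * c_prev + i * c_tilde    # memory cell: additive updates help gradients survive
    h_t = o * np.tanh(c_t)
    return h_t, c_t

h = np.zeros(hidden_size)
c = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):   # a toy 5-step sequence
    h, c = lstm_step(x_t, h, c)
</pre>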
 
 
http://www.asimovinstitute.org/wp-content/uploads/2016/09/lstm.png
 
 
<youtube>93rzMHtYT_0</youtube>
 
<youtube>9zhrxE5PQgY</youtube>
 
<youtube>l4X-kZjl1gs</youtube>
 
<youtube>xPotjBiIFFA</youtube>
 
 
 
=== Bidirectional ===
 
[http://www.youtube.com/results?search_query=BiLSTM+Bidirectional+Long+Short+term+Memory+Gated+Recurrent+Unit+BiGRU+BiRNN+Recurrent+network YouTube Search]
 
[http://www.google.com/search?q=BiLSTM+Bidirectional+Long+Short+term+Memory+Gated+Recurrent+Unit+BiGRU+BiRNN+Recurrent+network ...Google search]
 
 
NOTE: Bidirectional Long/Short-Term Memory (BiLSTM), Bidirectional Gated Recurrent Unit (BiGRU), and Bidirectional Recurrent Neural Network (BiRNN) look exactly the same as their unidirectional counterparts. The difference is that these networks are not just connected to the past, but also to the future. As an example, unidirectional LSTMs might be trained to predict the word “fish” by being fed the letters one by one, where the recurrent connections through time remember the last value. A BiLSTM would also be fed the next letter in the sequence on the backward pass, giving it access to future information. This trains the network to fill in gaps instead of advancing information, so instead of expanding an image on the edge, it could fill a hole in the middle of an image.  Schuster, Mike, and Kuldip K. Paliwal. “Bidirectional recurrent neural networks.” IEEE Transactions on Signal Processing 45.11 (1997): 2673-2681.
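A minimal sketch of that idea, assuming NumPy and a plain recurrent step (a BiLSTM would swap in an LSTM cell); all names and sizes here are illustrative assumptions.

<pre>
import numpy as np

rng = np.random.default_rng(2)
input_size, hidden_size = 8, 16

def make_step():
    # Each direction gets its own weights in a bidirectional layer.
    W_xh = rng.normal(0, 0.1, (hidden_size, input_size))
    W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))
    return lambda x_t, h_prev: np.tanh(W_xh @ x_t + W_hh @ h_prev)

step_fwd, step_bwd = make_step(), make_step()

def run_bidirectional(xs):
    # One pass left-to-right (the past) and one right-to-left (the future);
    # concatenating the two states gives every position access to both.
    h_f, h_b = np.zeros(hidden_size), np.zeros(hidden_size)
    fwd, bwd = [], []
    for x_t in xs:
        h_f = step_fwd(x_t, h_f)
        fwd.append(h_f)
    for x_t in reversed(xs):
        h_b = step_bwd(x_t, h_b)
        bwd.append(h_b)
    bwd.reverse()                                 # realign with the forward order
    return [np.concatenate(pair) for pair in zip(fwd, bwd)]

xs = [rng.normal(size=input_size) for _ in range(5)]   # a toy 5-step sequence
outputs = run_bidirectional(xs)                        # each output has 2 * hidden_size features
</pre>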
 
