Gated Recurrent Unit (GRU)
 
* [[Recurrent Neural Network (RNN)]]
 
* [http://towardsdatascience.com/animated-rnn-lstm-and-gru-ef124d06cf45 Animated RNN, LSTM and GRU | Raimi Karim - Towards Data Science]
 
A gated recurrent unit (GRU) is a gating mechanism in a [[Recurrent Neural Network (RNN)]]. The GRU is similar to a long short-term memory (LSTM) unit with a forget gate, but has fewer parameters than an LSTM because it lacks an output gate. The GRU's performance on certain tasks of polyphonic music modeling and speech signal modeling was found to be similar to that of the LSTM, and GRUs have been shown to exhibit even better performance on certain smaller datasets. [http://en.wikipedia.org/wiki/Gated_recurrent_unit Gated Recurrent Unit | Wikipedia]
http://upload.wikimedia.org/wikipedia/commons/thumb/3/37/Gated_Recurrent_Unit%2C_base_type.svg/220px-Gated_Recurrent_Unit%2C_base_type.svg.png
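The gating described above can be sketched as a single forward step in NumPy. This is a minimal illustration of the standard GRU equations, not any particular library's implementation: an update gate z decides how much of the previous hidden state to keep, a reset gate r decides how much of it feeds the candidate state, and there is no separate output gate. All names (`gru_cell`, the weight shapes) are chosen for this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step. Gates blend the previous hidden state with a
    candidate state; the output IS the hidden state (no output gate)."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde             # new hidden state

# Tiny demo: random weights, short input sequence (shapes are illustrative).
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = [rng.standard_normal(shape) for shape in
          [(n_hid, n_in), (n_hid, n_hid), (n_hid,)] * 3]
h = np.zeros(n_hid)
for t in range(5):
    h = gru_cell(rng.standard_normal(n_in), h, params)
print(h.shape)  # hidden state keeps its shape across steps
```

Because the new state is a convex combination of the previous state and a tanh candidate, every component of h stays in (-1, 1), which is one reason GRUs train stably over long sequences.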
  
 
<youtube>xSCy3q2ts44</youtube>
 

Revision as of 19:09, 30 June 2019
