Optimization Methods

 
* [[Natural Language Processing (NLP)]]
* [[Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Network (RNN)]]
** [[Average-SGD Weight-Dropped LSTM (AWD-LSTM)]]
* Gradient [[Boosting]] Algorithms

Youtube search... ...Google search

Methods (a minimal usage sketch follows the list):

* Stochastic gradient descent (SGD), with and without momentum
* Limited-memory BFGS (L-BFGS)
* Adagrad
* Adadelta
* Root Mean Square Propagation (RMSprop)
* Adam (Adaptive Moment Estimation)
* Hessian-free optimization (HF)
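
Most of these methods ship as stock optimizers in common deep learning libraries. Below is a minimal sketch, assuming PyTorch (a library this page does not itself name), that fits a toy least-squares problem with each listed method. Hessian-free (HF) is omitted because torch.optim has no stock implementation, and the learning rates are illustrative, not tuned.

<syntaxhighlight lang="python">
import torch

torch.manual_seed(0)
X = torch.randn(100, 3)                       # toy design matrix
true_w = torch.tensor([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * torch.randn(100)      # noisy linear targets

# Optimizer classes and illustrative (untuned) hyperparameters.
configs = {
    "SGD":          (torch.optim.SGD,      {"lr": 0.05}),
    "SGD+momentum": (torch.optim.SGD,      {"lr": 0.05, "momentum": 0.9}),
    "L-BFGS":       (torch.optim.LBFGS,    {"lr": 0.5}),
    "Adagrad":      (torch.optim.Adagrad,  {"lr": 0.5}),
    "Adadelta":     (torch.optim.Adadelta, {"lr": 1.0}),
    "RMSprop":      (torch.optim.RMSprop,  {"lr": 0.01}),
    "Adam":         (torch.optim.Adam,     {"lr": 0.1}),
}

for name, (cls, kwargs) in configs.items():
    w = torch.zeros(3, requires_grad=True)    # fresh parameters for each run
    opt = cls([w], **kwargs)

    def closure():
        # L-BFGS re-evaluates this several times per step; the others use it once.
        opt.zero_grad()
        loss = ((X @ w - y) ** 2).mean()
        loss.backward()
        return loss

    for _ in range(100):
        opt.step(closure)

    print(f"{name:12s} final loss: {closure().item():.6f}")
</syntaxhighlight>

Of the stock optimizers, only torch.optim.LBFGS requires the closure argument to step(); passing it to the others is harmless and keeps the loop uniform.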