Optimization Methods
[http://www.youtube.com/results?search_query=Optimization+LSTM+SGD+Adagrad+Adadelta+RMSprop+Adam+BGFS YouTube search...]
[http://www.google.com/search?q=Optimization+LSTM+SGD+Adagrad+Adadelta+RMSprop+Adam+BGF+machine+learning+ML+artificial+intelligence ...Google search]
* [[Natural Language Processing (NLP)]]
* [[Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Recurrent Neural Network (RNN)]]
* [[Average-SGD Weight-Dropped LSTM (AWD-LSTM)]]
* [[Gradient Boosting Algorithms]]
Methods (a brief usage sketch follows the list):
* Stochastic Gradient Descent (SGD), with and without momentum
* Limited-memory BFGS (L-BFGS)
* Adagrad
* Adadelta
* Root Mean Square Propagation (RMSprop)
* Adam
* Hessian-free (HF)
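Most of the gradient-based methods above ship as stock optimizers in common deep-learning frameworks. The following is a minimal sketch, assuming PyTorch is installed; it drives each listed optimizer through the same closure interface on a toy convex objective, using illustrative (untuned) learning rates. Hessian-free (HF) optimization has no stock <code>torch.optim</code> class, so it is omitted here.

<syntaxhighlight lang="python">
import torch

# One constructor per method listed above; the learning rates are
# illustrative defaults, not tuned values.
OPTIMIZERS = {
    "SGD":          lambda p: torch.optim.SGD(p, lr=0.01),
    "SGD+momentum": lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "L-BFGS":       lambda p: torch.optim.LBFGS(p, lr=0.1),
    "Adagrad":      lambda p: torch.optim.Adagrad(p, lr=0.01),
    "Adadelta":     lambda p: torch.optim.Adadelta(p, lr=1.0),
    "RMSprop":      lambda p: torch.optim.RMSprop(p, lr=0.01),
    "Adam":         lambda p: torch.optim.Adam(p, lr=0.001),
}

for name, make_opt in OPTIMIZERS.items():
    torch.manual_seed(0)                      # same start for every method
    w = torch.randn(10, requires_grad=True)   # toy parameter vector
    opt = make_opt([w])

    def closure():
        # L-BFGS re-evaluates the objective several times per step, so
        # every optimizer is driven through the same closure interface.
        opt.zero_grad()
        loss = (w ** 2).sum()                 # simple convex objective
        loss.backward()
        return loss

    for _ in range(20):
        opt.step(closure)
    print(f"{name:13s} ||w||^2 after 20 steps: {(w ** 2).sum().item():.6f}")
</syntaxhighlight>

Reseeding before each method gives every optimizer the same starting point, so the printed values are directly comparable; on a simple objective like this the learning rate typically matters more than the choice among these methods.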