Difference between revisions of "Meta-Learning"
Revision as of 10:46, 23 February 2020
- Learning Techniques
- Meta-Learning Update Rules for Unsupervised Representation Learning | Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein
- From zero to research — An introduction to Meta-learning | Thomas Wolf - Medium
"Learning how to learn": the use of machine learning algorithms to assist in the training and optimization of other machine learning models. What is Meta-Learning? | Daniel Nelson - Unite.ai
Outer Training Methods
- Black box
  - Random search
  - Hyperparameter optimization
- Reinforcement Learning (RL)
- Evolution
- Gradients - the whole training process is differentiable: unroll the inner optimization, compute gradients through the unroll, then apply SGD.
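The "Random search" outer method listed above treats inner training as a black box: hyperparameters are sampled at random, each candidate is scored by the loss of the model it produces, and the best is kept. A minimal sketch, assuming a toy 1-D quadratic inner task (all names here are illustrative, not from the sources above):

```python
import random

def train_and_evaluate(lr, target=2.0, w0=0.0, steps=5):
    """Inner training loop: a few SGD steps on L(w) = 0.5 * (w - target)^2;
    returns the final loss, which the outer loop treats as a black-box score."""
    w = w0
    for _ in range(steps):
        w -= lr * (w - target)          # SGD step on the quadratic
    return 0.5 * (w - target) ** 2

random.seed(0)
best_lr, best_loss = None, float("inf")
for _ in range(50):                     # outer loop: only losses are observed
    lr = random.uniform(0.0, 1.0)       # sample a candidate hyperparameter
    loss = train_and_evaluate(lr)
    if loss < best_loss:
        best_lr, best_loss = lr, loss
```

Because the outer loop never uses gradients of the inner process, the same recipe applies unchanged when the inner training is non-differentiable.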
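The "Gradients" outer method above (unroll optimization, compute gradients, then SGD) can be sketched on the same toy quadratic task family. This is an illustrative assumption, not code from the cited sources: the inner learning rate alpha is meta-learned by differentiating through the unrolled inner SGD steps (forward-mode, by hand) and applying SGD to alpha in the outer loop:

```python
def inner_unroll(alpha, target, w0=0.0, steps=5):
    """Run K inner SGD steps on L(w) = 0.5 * (w - target)^2,
    tracking dw/dalpha through the unroll (forward-mode accumulation)."""
    w, dw_dalpha = w0, 0.0
    for _ in range(steps):
        grad = w - target                       # dL/dw for the quadratic
        # derivative of the update w <- w - alpha * grad, w.r.t. alpha
        dw_dalpha = dw_dalpha - grad - alpha * dw_dalpha
        w = w - alpha * grad
    return w, dw_dalpha

def meta_step(alpha, targets, meta_lr=0.01):
    """One outer SGD step on alpha, averaging meta-gradients over tasks."""
    meta_grad = 0.0
    for t in targets:
        w_final, dw = inner_unroll(alpha, t)
        meta_grad += (w_final - t) * dw         # chain rule through the outer loss
    return alpha - meta_lr * meta_grad / len(targets)

def meta_loss(alpha, targets):
    return sum(0.5 * (inner_unroll(alpha, t)[0] - t) ** 2
               for t in targets) / len(targets)

targets = [1.0, 2.0, 3.0]                       # a small family of inner tasks
alpha = 0.1
before = meta_loss(alpha, targets)
for _ in range(200):                            # outer SGD on alpha
    alpha = meta_step(alpha, targets)
after = meta_loss(alpha, targets)
```

In real systems the hand-written derivative is replaced by automatic differentiation through the unrolled computation graph, but the structure is the same: inner updates, a gradient through them, then an outer SGD step.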