Gradient Descent Optimization & Challenges

Revision as of 21:28, 19 January 2019
* [[Gradient Boosting Algorithms]]
* [[Backpropagation]]
* [[Objective vs. Cost vs. Loss vs. Error Function]]
* [[Topology and Weight Evolving Artificial Neural Network (TWEANN)]]

== Gradient Descent - Stochastic (SGD), Batch (BGD) & Mini-Batch ==
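The three variants differ only in how much data feeds each update: batch gradient descent (BGD) uses the full dataset per step, stochastic gradient descent (SGD) uses a single example, and mini-batch sits in between. A minimal NumPy sketch, using an illustrative linear-regression problem and made-up hyperparameters (names like `descend` and the learning rate are assumptions, not from this page):

```python
import numpy as np

# Illustrative setup: minimize mean squared error on synthetic linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

def grad(w, Xb, yb):
    # Gradient of 0.5 * mean((Xb @ w - yb)^2) with respect to w
    return Xb.T @ (Xb @ w - yb) / len(yb)

def descend(batch_size, lr=0.1, epochs=50):
    w = np.zeros(3)
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)            # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            w -= lr * grad(w, X[b], y[b])   # one parameter update per batch
    return w

w_batch = descend(batch_size=100)  # BGD: whole dataset per step
w_mini  = descend(batch_size=16)   # mini-batch: the common middle ground
w_sgd   = descend(batch_size=1)    # SGD: one example per step, noisiest path
```

All three converge here; the trade-off is that smaller batches give cheaper, noisier steps, while larger batches give smoother but more expensive ones.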
== Vanishing & Exploding Gradients Problems ==
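Backpropagating through many layers multiplies the gradient by one Jacobian per layer, so its norm scales roughly like the dominant singular value raised to the depth: below 1 it vanishes, above 1 it explodes. A small illustrative sketch (the matrix and depth are assumptions chosen to make the effect visible):

```python
import numpy as np

def gradient_norm_after(T, scale):
    # 2x2 "layer" Jacobian scaled to control its spectral radius
    W = scale * np.eye(2)
    g = np.ones(2)           # gradient arriving at the deepest layer
    for _ in range(T):
        g = W.T @ g          # chain rule through one more layer
    return np.linalg.norm(g)

vanish  = gradient_norm_after(50, 0.9)  # shrinks by roughly 0.9**50 ~ 5e-3
explode = gradient_norm_after(50, 1.1)  # grows by roughly 1.1**50 ~ 117x
```

With 50 layers even a modest per-layer factor of 0.9 or 1.1 changes the gradient norm by two orders of magnitude in either direction, which is why deep and recurrent networks need remedies such as careful initialization, gradient clipping, or gated architectures.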
== Vanishing & Exploding Gradients Challenges with Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNN) ==
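In a vanilla RNN the gradient through T time steps picks up a factor of the recurrent weight times the tanh derivative at every step, which squashes it toward zero. The LSTM cell state is updated additively, c_t = f * c_(t-1) + (input terms), so the per-step gradient factor along that path is just the forget gate f; with f near 1 the gradient survives far longer. A scalar sketch under illustrative assumptions (the values of w, h, and f are made up for the comparison):

```python
import numpy as np

def rnn_grad_factor(T, w=1.0, h=1.0):
    # Vanilla RNN: each step multiplies the gradient by w * tanh'(h)
    g = 1.0
    for _ in range(T):
        g *= w * (1.0 - np.tanh(h) ** 2)  # tanh' squashes the gradient every step
    return g

def lstm_grad_factor(T, f=0.99):
    # LSTM cell-state path: the per-step factor is just the forget gate f
    return f ** T

rnn_g  = rnn_grad_factor(50)   # effectively zero after 50 steps
lstm_g = lstm_grad_factor(50)  # roughly 0.99**50 ~ 0.6, still usable
```

This is the core reason LSTMs (and similar gated architectures) handle long-range dependencies that plain RNNs lose to vanishing gradients.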