Backpropagation

* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
 
* [[Objective vs. Cost vs. Loss vs. Error Function]]

* [[Optimization Methods]]
 
* [https://en.wikipedia.org/wiki/Backpropagation Wikipedia]
 
* [https://neuralnetworksanddeeplearning.com/chap2.html How the backpropagation algorithm works]

Backpropagation is the primary algorithm for performing gradient descent on neural networks. First, the output value of each node is computed (and cached) in a forward pass. Then, the partial derivative of the error with respect to each parameter is computed in a backward pass through the graph. ([https://developers.google.com/machine-learning/glossary Machine Learning Glossary | Google])
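
A minimal sketch of the two passes described above, assuming a one-hidden-layer network with a sigmoid hidden layer, a linear output, and a squared-error loss; the function and variable names (forward, backward, W1, W2) are illustrative only, not part of any particular library.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    # Forward pass: compute and cache the output of each node.
    z1 = W1 @ x            # hidden pre-activation
    h = sigmoid(z1)        # hidden activation (cached for the backward pass)
    y_hat = W2 @ h         # linear output
    return y_hat, (x, h)

def backward(y_hat, y, W2, cache):
    # Backward pass: the chain rule gives the partial derivative of the
    # error with respect to each parameter.
    x, h = cache
    d_yhat = y_hat - y            # dE/dy_hat for E = 0.5 * (y_hat - y)^2
    dW2 = np.outer(d_yhat, h)     # dE/dW2
    d_h = W2.T @ d_yhat           # error propagated back to the hidden layer
    d_z1 = d_h * h * (1.0 - h)    # through the sigmoid: dh/dz1 = h(1 - h)
    dW1 = np.outer(d_z1, x)       # dE/dW1
    return dW1, dW2

# One gradient-descent step on a toy input.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x, y = np.array([0.5, -0.2]), np.array([1.0])
y_hat, cache = forward(x, W1, W2)
dW1, dW2 = backward(y_hat, y, W2, cache)
lr = 0.1                          # learning rate (a hyperparameter)
W1 -= lr * dW1
W2 -= lr * dW2
</syntaxhighlight>

The values cached in the forward pass (the input and the hidden activation) are exactly what the chain rule needs in the backward pass, which is why each node's output is computed and cached before any gradient is taken.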


[[File:backpropagation.png|Backpropagation]]