Backpropagation

* [https://www.google.com/search?q=Backpropagation+deep+machine+learning+ML ...Google search]
* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
* [[Objective vs. Cost vs. Loss vs. Error Function]]
* [[Optimization Methods]]
* [https://en.wikipedia.org/wiki/Backpropagation Wikipedia]
* [https://neuralnetworksanddeeplearning.com/chap2.html How the backpropagation algorithm works]
* [https://hmkcode.github.io/ai/backpropagation-step-by-step/ Backpropagation Step by Step]

Backpropagation is the primary algorithm for performing gradient descent on neural networks. First, the output value of each node is calculated (and cached) in a forward pass. Then, the partial derivative of the error with respect to each parameter is calculated in a backward pass through the graph. — Machine Learning Glossary | Google
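
To make the two passes concrete, here is a minimal NumPy sketch of backpropagation for a one-hidden-layer network with a sigmoid activation and mean-squared error. All names, shapes, and values below are illustrative assumptions, not taken from the glossary.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 4 samples, 3 features, 1 target (illustrative).
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Parameters of a 3 -> 5 -> 1 network.
W1, b1 = 0.1 * rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = 0.1 * rng.normal(size=(5, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(100):
    # Forward pass: compute (and cache) the output of each node.
    z1 = X @ W1 + b1            # hidden pre-activations (cached)
    h = sigmoid(z1)             # hidden activations (cached)
    y_hat = h @ W2 + b2         # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: partial derivatives of the error with respect to
    # each parameter, propagated back through the graph via the chain rule.
    d_yhat = 2.0 * (y_hat - y) / len(X)  # dL/d(y_hat)
    dW2 = h.T @ d_yhat                   # dL/dW2
    db2 = d_yhat.sum(axis=0)             # dL/db2
    d_h = d_yhat @ W2.T                  # dL/dh
    d_z1 = d_h * h * (1.0 - h)           # sigmoid'(z1) = h * (1 - h)
    dW1 = X.T @ d_z1                     # dL/dW1
    db1 = d_z1.sum(axis=0)               # dL/db1

    # Gradient descent step: move every parameter against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
</syntaxhighlight>

Caching z1 and h during the forward pass is what lets the backward pass apply the chain rule without recomputing them.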


[[File:backpropagation.png]]