Forward-Forward
 
[https://www.youtube.com/results?search_query=backpropagation YouTube search...]
[https://www.google.com/search?q=Backpropagation+deep+machine+learning+ML ...Google search]
  
* [[Backpropagation]] ... [[Feed Forward Neural Network (FF or FFNN)|FFNN]] ... [[Forward-Forward]] ... [[Activation Functions]] ... [[Softmax]] ... [[Loss]] ... [[Boosting]] ... [[Gradient Descent Optimization & Challenges|Gradient Descent]] ... [[Algorithm Administration#Hyperparameter|Hyperparameter]] ... [[Manifold Hypothesis]] ... [[Principal Component Analysis (PCA)|PCA]]
 
* [https://www.cs.toronto.edu/~hinton/FFA13.pdf The Forward-Forward Algorithm: Some Preliminary Investigations] | [[Creatives#Geoffry Hinton|Geoffrey Hinton]]
 
* [[Objective vs. Cost vs. Loss vs. Error Function]]
* [https://en.wikipedia.org/wiki/Backpropagation Wikipedia]
 
 
* [https://bdtechtalks.com/2022/12/19/forward-forward-algorithm-geoffrey-hinton/ What is the “forward-forward” algorithm, Geoffrey Hinton’s new AI technique? | Ben Dickson - TechTalks]
 
The Forward-Forward algorithm is a new learning procedure for neural networks that works well enough on a few small problems to be worth serious investigation. It replaces the forward and backward passes of backpropagation with two forward passes: one with positive (i.e. real) data and the other with negative data, which could be generated by the network itself. Each layer has its own objective function, which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness, but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes can be separated in time, the negative passes can be done offline, which makes the learning much simpler in the positive pass and allows video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
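The layer-local objective translates directly into code. Below is a minimal PyTorch sketch, assuming the sum-of-squared-activities goodness and a logistic loss that pushes goodness above a fixed threshold for positive data and below it for negative data; the FFLayer class, threshold, and learning rate are illustrative choices, not Hinton's reference code.

<pre>
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """One Forward-Forward layer with its own local objective (illustrative)."""
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()
        self.threshold = threshold
        # Each layer owns its optimizer: no gradients flow between layers.
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Pass on only the direction of the activity vector, so the next
        # layer cannot judge goodness from this layer's vector length.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activities in this layer.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Logistic loss: high goodness for positive data, low for negative.
        loss = torch.log1p(torch.exp(torch.cat([
            self.threshold - g_pos,  # penalize positive data below threshold
            g_neg - self.threshold,  # penalize negative data above threshold
        ]))).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach outputs so the next layer trains on activities alone.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Train layer by layer: each layer optimizes only its own objective.
layers = [FFLayer(784, 500), FFLayer(500, 500)]
x_pos = torch.rand(64, 784)  # stand-in for real (positive) data
x_neg = torch.rand(64, 784)  # stand-in for network-generated negative data
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
</pre>

Note that backward() here only differentiates each layer's local loss with respect to that layer's own weights; no error signal propagates from one layer to another, which is what distinguishes Forward-Forward from backpropagation.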
  
<img src="https://miro.medium.com/v2/resize:fit:828/format:webp/1*IwX3pdEu-4OM3XX97z7k-A.png" width="600">
[https://medium.com/mlearning-ai/pytorch-implementation-of-forward-forward-algorithm-by-geoffrey-hinton-and-analysis-of-performance-7e4f1a26d70f PyTorch implementation of Geoffrey Hinton’s Forward-Forward algorithm and analysis of performance VS backpropagation | Diego Fiori - Medium]
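For classification, Hinton's paper embeds each candidate label in the input (e.g. replacing the first 10 pixels of an MNIST image with a one-hot label) and picks the label whose input accumulates the highest goodness across layers. A sketch continuing from the FFLayer stack above; the function name and dimensions are illustrative.

<pre>
import torch

@torch.no_grad()
def predict(layers, x, num_classes=10):
    """x: batch of flattened images, shape [batch, 784]."""
    goodness_per_label = []
    for label in range(num_classes):
        h = x.clone()
        h[:, :num_classes] = 0.0
        h[:, label] = 1.0  # overlay a one-hot label on the input
        total = torch.zeros(x.shape[0])
        for layer in layers:
            h = layer(h)
            total += h.pow(2).sum(dim=1)  # accumulate per-layer goodness
        goodness_per_label.append(total)
    # The label with the highest accumulated goodness wins.
    return torch.stack(goodness_per_label, dim=1).argmax(dim=1)
</pre>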
  
<youtube>NWqy_b1OvwQ</youtube>
 