Lifelong Learning

[https://www.youtube.com/results?search_query=lifelong+learning+neural+network+Multi+model YouTube search...]
[https://www.google.com/search?q=lifelong+learning+neural+network+Multi+model ...Google search]
* [[Learning Techniques]]
** [[What is Artificial Intelligence (AI)? | Artificial Intelligence (AI)]] ... [[Generative AI]] ... [[Machine Learning (ML)]] ... [[Deep Learning]] ... [[Neural Network]] ... [[Reinforcement Learning (RL)|Reinforcement]] ... [[Learning Techniques]]
** [[Transfer Learning]]
* [[Artificial General Intelligence (AGI) to Singularity]] ... [[Inside Out - Curious Optimistic Reasoning| Curious Reasoning]] ... [[Emergence]] ... [[Moonshots]] ... [[Explainable / Interpretable AI|Explainable AI]] ... [[Algorithm Administration#Automated Learning|Automated Learning]]
* [https://www.darpa.mil/program/lifelong-learning-machines Lifelong Learning Machines (L2M) | DARPA]
* [https://www.darpa.mil/news-events/2019-03-12 Progress on Lifelong Learning Machines Shows Potential for Bio-Inspired Algorithms | USC & DARPA]
* [https://www.darpa.mil/news-events/2018-05-03 Researchers Selected to Develop Novel Approaches to Lifelong Machine Learning | DARPA]
* [[Conversational AI]] ... [[ChatGPT]] | [[OpenAI]] ... [[Bing/Copilot]] | [[Microsoft]] ... [[Gemini]] | [[Google]] ... [[Claude]] | [[Anthropic]] ... [[Perplexity]] ... [[You]] ... [[phind]] ... [[Ernie]] | [[Baidu]]
* [[Memory]]
* [[Automated Machine Learning (AML) - AutoML]]
 
  
In recent years, researchers have developed deep neural networks that can perform a variety of tasks, including visual recognition and natural language processing (NLP) tasks. Although many of these models have achieved remarkable results, they typically perform well on only one task because of what is referred to as "catastrophic forgetting": when a model that was initially trained on task A is later trained on task B, its performance on task A significantly declines. [https://techxplore.com/news/2019-03-approach-multi-model-deep-neural-networks.html A new approach to overcome multi-model forgetting in deep neural networks] and [https://techxplore.com/news/2019-03-memory-approach-enable-lifelong.html A generative memory approach to enable lifelong reinforcement learning] | Ingrid Fadelli
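A common family of countermeasures penalizes changes to weights that mattered for earlier tasks. Below is a minimal, hypothetical PyTorch sketch of one well-known method of this kind, Elastic Weight Consolidation (EWC, Kirkpatrick et al. 2017); it is not the specific approach from the articles above, and names such as <code>data_loader_a</code> are illustrative.

<syntaxhighlight lang="python">
# Minimal sketch of Elastic Weight Consolidation (EWC): after training on
# task A, estimate how important each parameter was (diagonal empirical
# Fisher information) and penalize moving important parameters during task B.
import torch
import torch.nn.functional as F

class EWC:
    def __init__(self, model, data_loader_a, device="cpu"):
        self.model = model
        # Snapshot of the parameters after finishing task A.
        self.star = {n: p.detach().clone() for n, p in model.named_parameters()}
        # Diagonal Fisher estimate: mean squared gradient over task-A batches.
        self.fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        model.eval()
        for x, y in data_loader_a:
            model.zero_grad()
            F.cross_entropy(model(x.to(device)), y.to(device)).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    self.fisher[n] += p.grad.detach() ** 2 / len(data_loader_a)

    def penalty(self):
        # Quadratic pull toward the task-A weights, scaled by importance,
        # so weights that mattered for task A resist being overwritten.
        return sum((self.fisher[n] * (p - self.star[n]) ** 2).sum()
                   for n, p in self.model.named_parameters())

# While training on task B, add the penalty to the task-B loss:
#   loss = F.cross_entropy(model(x_b), y_b) + lam * ewc.penalty()
</syntaxhighlight>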
  
https://3c1703fe8d.site.internapcdn.net/newman/csz/news/800/2019/11-anewapproach.jpg
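The generative-memory article linked above builds on replay: instead of training only on the new task, the learner keeps revisiting real or generated samples from earlier tasks. A minimal, hypothetical rehearsal sketch follows; <code>ReplayMemory</code> and <code>model.update</code> are illustrative stand-ins, not from the article.

<syntaxhighlight lang="python">
# Sketch of rehearsal / replay for lifelong learning: mix replayed
# old-task examples into every training step on the new task, so
# earlier knowledge is continually refreshed rather than overwritten.
import random

class ReplayMemory:
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.buffer = []

    def add(self, example):
        # Evict a random old example once full (crude reservoir-style cap).
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(random.randrange(len(self.buffer)))
        self.buffer.append(example)

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

def train_step(model, batch_b, memory, mix=0.5):
    # One step on task B plus replayed task-A data; model.update is a
    # placeholder for a single gradient step on a combined batch.
    replayed = memory.sample(int(len(batch_b) * mix))
    model.update(batch_b + replayed)
</syntaxhighlight>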
 
<youtube>6PlvyWQUQu8</youtube>
<youtube>QGFlZfflYYg</youtube>
<youtube>c6VpDHoUIjQ</youtube>
<youtube>ohyznBhxLow</youtube>
<youtube>q4A76i6TOCc</youtube>
<youtube>Zp2ufIbiwOs</youtube>
<youtube>qRXPS_6fAfE</youtube>
* [https://techxplore.com/news/2018-10-developmental-approach-sensorimotor-space-enlargement.html A new developmental reinforcement learning approach for sensorimotor space enlargement | Ingrid Fadelli]

https://3c1703fe8d.site.internapcdn.net/newman/csz/news/800/2018/4-anewdevelopm.jpg
== Forgetting ==
[https://www.youtube.com/results?search_query=Catastrophic+Forgetting+neural+network+Multi+model YouTube search...]
[https://www.google.com/search?q=Catastrophic+Forgetting+neural+network+Multi+model ...Google search]
In the quest to build AI that goes beyond today's single-purpose machines, scientists are developing new tools to help AI remember the right things and forget the rest. [https://www.axios.com/ai-memory-forgetting-336e0525-b4ca-4bec-bed5-745b3d613f65.html Saving AI from catastrophic forgetting | Kaveh Waddell - Axios]

* [https://www.axios.com/memory-forgetting-neuroscience-brain-ebefed70-2d00-4340-a53d-8b58bbb5d522.html Special report: The future of forgetting | Alison Snyder - Axios]
* [https://www.quantamagazine.org/to-remember-the-brain-must-actively-forget-20180724/ To Remember, the Brain Must Actively Forget | Toma Vagner - Quanta Magazine]
* [https://www.nytimes.com/2019/03/22/health/memory-forgetting-psychology.html Can We Get Better at Forgetting? | Benedict Carey - The New York Times]
* [https://venturebeat.com/2020/02/25/openais-jeff-clune-on-deep-learnings-achilles-heel-and-a-faster-path-to-agi/ OpenAI's Jeff Clune on deep learning's Achilles' heel and a faster path to artificial general intelligence (AGI) | Khari Johnson - VentureBeat]
* [https://en.wikipedia.org/wiki/Catastrophic_interference Catastrophic Interference | Wikipedia]
* [https://venturebeat.com/ai/machine-unlearning-the-critical-art-of-teaching-ai-to-forget/ Machine unlearning: The critical art of teaching AI to forget | Matthew Duffin - VentureBeat] ... outlines several methods for machine unlearning, including sharding and slicing, incremental training, and data deletion. Sharding and slicing divide the data into smaller subsets, each of which can be unlearned independently (see the sketch after this list). Incremental training trains the model on a new dataset while keeping the old data in place, gradually removing the old data as it becomes outdated. Data deletion removes specific data points from the model, either manually or automatically. The article concludes with the challenges of machine unlearning: it can be computationally expensive to unlearn large models, and it can be difficult to ensure that unlearning does not damage the model's accuracy.
=== [https://www.vice.com/en_us/article/evym4m/ai-told-me-human-face-neural-networks Watching AI Slowly Forget a Human Face Is Incredibly Creepy] ===
<youtube>p3HWpBScjpA</youtube>
<youtube>BSTWzl8rbjw</youtube>
<youtube>HMzVi4xWVFQ</youtube>
<youtube>OBkruvl8ih8</youtube>
<youtube>5uQ0q0x_Xpk</youtube>
<youtube>xwV6F_itM4o</youtube>