Lifelong Learning
 
* [https://venturebeat.com/2020/02/25/openais-jeff-clune-on-deep-learnings-achilles-heel-and-a-faster-path-to-agi/ OpenAI’s Jeff Clune on deep learning’s Achilles’ heel and a faster path to artificial general intelligence (AGI) - Khari Johnson - VentureBeat]
 
* [https://en.wikipedia.org/wiki/Catastrophic_interference Catastrophic Interference]  
 
* [https://venturebeat.com/ai/machine-unlearning-the-critical-art-of-teaching-ai-to-forget/ Machine unlearning: The critical art of teaching AI to forget | Matthew Duffin - VentureBeat] ... outlines several methods for machine unlearning: sharding and slicing, incremental training, and data deletion. Sharding and slicing divide the data into smaller subsets that can be unlearned independently; incremental training trains the model on new data while gradually retiring old data as it becomes outdated; and data deletion removes specific data points from the model, either manually or automatically. The article closes with the challenges and opportunities of machine unlearning: unlearning large models can be computationally expensive, and it is hard to guarantee that the unlearning process does not damage the model's accuracy. A minimal sketch of the sharding idea follows this list.
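To make the sharding idea concrete, here is a minimal sketch, assuming scikit-learn and binary 0/1 labels. The class ShardedEnsemble and its methods are illustrative names, not a real library API, and this covers only the sharding half of "sharding and slicing": each shard trains its own model, so forgetting one sample means retraining a single shard instead of the whole ensemble.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedEnsemble:
    """Train one model per data shard; unlearning a sample only
    retrains the shard that contained it (sharding-style unlearning)."""

    def __init__(self, n_shards=4, seed=0):
        self.n_shards = n_shards
        self.seed = seed
        self.shards = []   # list of (X_shard, y_shard, original_row_indices)
        self.models = []

    def fit(self, X, y):
        # Randomly partition the rows into disjoint shards.
        order = np.random.default_rng(self.seed).permutation(len(X))
        self.shards, self.models = [], []
        for part in np.array_split(order, self.n_shards):
            Xs, ys = X[part], y[part]
            self.shards.append((Xs, ys, part))
            self.models.append(LogisticRegression().fit(Xs, ys))
        return self

    def forget(self, sample_index):
        # Find the shard holding the sample, drop that row, and retrain
        # only that shard's model; the other shards never saw the sample.
        for i, (Xs, ys, idx) in enumerate(self.shards):
            keep = idx != sample_index
            if keep.all():
                continue
            Xs, ys, idx = Xs[keep], ys[keep], idx[keep]
            self.shards[i] = (Xs, ys, idx)
            self.models[i] = LogisticRegression().fit(Xs, ys)

    def predict(self, X):
        # Aggregate the shard models by majority vote (binary labels assumed).
        votes = np.stack([m.predict(X) for m in self.models])
        return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy usage: forgetting row 42 retrains one shard, not the whole model.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] > 0).astype(int)
ens = ShardedEnsemble(n_shards=4).fit(X, y)
ens.forget(sample_index=42)
</syntaxhighlight>

The trade-off is the usual one for shard-based unlearning: more shards make each retraining cheaper, but each individual model sees less data, so the aggregated prediction can lose accuracy.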
  
  



In recent years, researchers have developed deep neural networks that can perform a variety of tasks, including visual recognition and natural language processing (NLP). Although many of these models achieve remarkable results, they typically perform well on only one particular task because of what is known as "catastrophic forgetting": when a model initially trained on task A is later trained on task B, its performance on task A declines sharply. See "A new approach to overcome multi-model forgetting in deep neural networks" and "A generative memory approach to enable lifelong reinforcement learning" | Ingrid Fadelli. The sketch below reproduces the effect on a toy problem.
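The failure mode is easy to reproduce. The following is a minimal sketch, assuming PyTorch; the toy tasks and all names are illustrative. A small network is trained on task A, then sequentially on a shifted task B with no replay of A's data, and its accuracy on A typically falls back toward chance.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(center):
    # 2-D points around `center`; the label says whether x0 exceeds the center.
    x = torch.randn(512, 2) + center
    y = (x[:, 0] > center).long()
    return x, y

xa, ya = make_task(0.0)   # task A: decision boundary at x0 = 0
xb, yb = make_task(4.0)   # task B: same rule in a shifted region (x0 = 4)

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def acc(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

train(xa, ya)
print(f"task A accuracy after training on A: {acc(xa, ya):.2f}")  # near 1.00
train(xb, yb)   # sequential training on B, with no replay of task A data
print(f"task A accuracy after training on B: {acc(xa, ya):.2f}")  # typically near chance
print(f"task B accuracy after training on B: {acc(xb, yb):.2f}")
</syntaxhighlight>

Common mitigations fall into three families: replaying stored or generated examples of old tasks (as in the generative memory approach cited above), regularizing weight changes (e.g., Elastic Weight Consolidation), and isolating parameters per task.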


== Forgetting ==


In the quest to build AI that goes beyond today's single-purpose machines, scientists are developing new tools to help AI remember the right things and forget the rest. From: Saving AI from catastrophic forgetting | Kaveh Waddell - Axios


Watching AI Slowly Forget a Human Face Is Incredibly Creepy