Lifelong Learning

* [http://www.darpa.mil/news-events/2019-03-12 Progress on Lifelong Learning Machines Shows Potential for Bio-Inspired Algorithms | USC & DARPA]
* [http://www.darpa.mil/news-events/2018-05-03 Researchers Selected to Develop Novel Approaches to Lifelong Machine Learning | DARPA]
* [[Automated Machine Learning (AML) - AutoML]]
* [[Reinforcement Learning (RL)]]
* [[Transfer Learning]]
  
 
In recent years, researchers have developed deep neural networks that can perform a variety of tasks, including visual recognition and natural language processing (NLP). Although many of these models achieve remarkable results, they typically perform well on only one particular task due to what is referred to as "catastrophic forgetting": when a model initially trained on task A is later trained on task B, its performance on task A declines significantly. [http://techxplore.com/news/2019-03-approach-multi-model-deep-neural-networks.html A new approach to overcome multi-model forgetting in deep neural networks] and [https://techxplore.com/news/2019-03-memory-approach-enable-lifelong.html A generative memory approach to enable lifelong reinforcement learning] | Ingrid Fadelli
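
To make the effect concrete, the sketch below trains a small PyTorch classifier on a synthetic task A, then on a synthetic task B, and measures how task A accuracy degrades. It then shows rehearsal (replaying stored task A examples during task B training), a simple baseline mitigation; the tasks, model, and hyperparameters are all hypothetical for illustration, and this is not the method of either article linked above.

<syntaxhighlight lang="python">
# Minimal sketch of catastrophic forgetting on synthetic data (hypothetical
# tasks; illustrative only, not the approach from the linked articles).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(sep_dim, n=500, dim=10):
    """Binary task: two Gaussian clusters separated along feature `sep_dim`."""
    offset = torch.zeros(dim)
    offset[sep_dim] = 3.0
    x = torch.cat([torch.randn(n, dim) + offset, torch.randn(n, dim) - offset])
    y = torch.cat([torch.ones(n, dtype=torch.long),
                   torch.zeros(n, dtype=torch.long)])
    return x, y

def train(model, x, y, epochs=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def fresh_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(sep_dim=0)  # task A: clusters differ along feature 0
xb, yb = make_task(sep_dim=5)  # task B: clusters differ along feature 5

# Sequential training: task A accuracy typically drops sharply after the
# model is trained on task B alone.
model = fresh_model()
train(model, xa, ya)
print("A after training on A:", accuracy(model, xa, ya))
train(model, xb, yb)
print("A after training on B (forgetting):", accuracy(model, xa, ya))

# Rehearsal: replaying a stored subset of task A examples while training on
# task B largely preserves task A performance.
model = fresh_model()
train(model, xa, ya)
train(model, torch.cat([xb, xa[::10]]), torch.cat([yb, ya[::10]]))
print("A after B with rehearsal:", accuracy(model, xa, ya))
</syntaxhighlight>

Rehearsal is only one simple baseline; the articles above discuss more sophisticated alternatives, such as generative memory, that avoid storing raw examples.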



