Lifelong Latent Actor-Critic (LILAC)

 
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools  
 
}}
 
[https://www.youtube.com/results?search_query=Lifelong+Latent+Actor+Critic+LILAC+Reinforcement+Machine+Learning YouTube search...]
[https://www.google.com/search?q=Lifelong+Latent+Actor+Critic+LILAC+Reinforcement+Machine+Learning ...Google search]
  
 
* [[Lifelong Learning]]
 
  
  
Researchers from [https://ai.stanford.edu/ Stanford AI Lab (SAIL)] have devised a method to deal with data and environments that change over time in a way that outperforms some leading approaches to reinforcement learning. Lifelong Latent Actor-Critic, aka LILAC, uses latent variable models and a maximum entropy policy to leverage past experience for better sample efficiency and performance in dynamic environments. [https://venturebeat.com/2020/07/01/stanford-ai-researchers-introduce-lilac-reinforcement-learning-for-dynamic-environments/ Stanford AI researchers introduce LILAC, reinforcement learning for dynamic environments | Khari Johnson - VentureBeat]
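The two ingredients named above — a latent variable inferred from recent experience and a maximum-entropy policy objective — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `sample_latent` is a hypothetical stand-in for LILAC's learned latent variable model, and the objective is the generic soft (entropy-regularized) RL objective that maximum-entropy actor-critic methods optimize.

```python
import numpy as np

def sample_latent(history, dim=4):
    # Hypothetical stand-in for LILAC's learned latent variable model:
    # summarize recent rewards into a fixed-size vector the policy can
    # condition on, so behavior can adapt as the environment shifts.
    z = np.zeros(dim)
    recent = np.asarray(history[-dim:], dtype=float)
    z[:recent.size] = recent
    return z

def policy_logits(state, z, W):
    # Policy conditioned on both the observed state and the latent z.
    return W @ np.concatenate([state, z])

def max_entropy_objective(logits, rewards, alpha=0.1):
    # Soft RL objective: expected reward plus alpha-weighted policy
    # entropy, the "maximum entropy" term mentioned in the text.
    p = np.exp(logits - logits.max())
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return float(p @ rewards + alpha * entropy)
```

With zero weights the policy is uniform, so the objective reduces to `alpha * ln(num_actions)` when rewards are zero — the entropy bonus alone, which is what keeps exploration alive in changing environments.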
  
 
== Continuous Action ==
 

Revision as of 22:55, 28 March 2023



