Lifelong Latent Actor-Critic (LILAC)

 
* [[Lifelong Learning]]
* [[Reinforcement Learning (RL)]]
** [[Monte Carlo]] (MC) Method - Model Free Reinforcement Learning
** [[Markov Decision Process (MDP)]]
** [[State-Action-Reward-State-Action (SARSA)]]
** [[Q Learning]]
*** [[Deep Q Network (DQN)]]
** [[Deep Reinforcement Learning (DRL)]] DeepRL
** [[Distributed Deep Reinforcement Learning (DDRL)]]
** [[Evolutionary Computation / Genetic Algorithms]]
** [[Actor Critic]]
*** [[Advanced Actor Critic (A2C)]]
*** [[Asynchronous Advantage Actor Critic (A3C)]]
*** Lifelong Latent Actor-Critic (LILAC)
** [[Hierarchical Reinforcement Learning (HRL)]]
  
 
Researchers from [http://ai.stanford.edu/ Stanford AI Lab (SAIL)] have devised a method to deal with data and environments that change over time in a way that outperforms some leading approaches to reinforcement learning. Lifelong Latent Actor-Critic, aka LILAC, uses latent variable models and a maximum entropy policy to leverage past experience for better sample efficiency and performance in dynamic environments. [http://venturebeat.com/2020/07/01/stanford-ai-researchers-introduce-lilac-reinforcement-learning-for-dynamic-environments/ Stanford AI researchers introduce LILAC, reinforcement learning for dynamic environments | Khari Johnson - VentureBeat]
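The core idea described above — an actor-critic whose policy conditions on a latent variable summarising the changing environment, trained with a maximum entropy objective — can be illustrated with a minimal numerical sketch. This is not the authors' implementation; all names, shapes, and the tabular setup are hypothetical stand-ins for the learned components.

```python
# Hypothetical sketch of the LILAC idea: a policy pi(a | s, z) conditioned on a
# latent variable z (inferred per episode to capture the current dynamics),
# scored with a maximum entropy objective E_pi[Q] + alpha * H(pi).
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

n_states, n_actions, latent_dim = 4, 3, 2

# Stand-ins for learned quantities (random here for illustration):
z = rng.normal(size=latent_dim)                   # latent summarising this episode's dynamics
W = rng.normal(size=(n_states, n_actions, latent_dim))
b = rng.normal(size=(n_states, n_actions))
q = rng.normal(size=(n_states, n_actions))        # critic values Q(s, a) for this latent

def policy(state, z):
    # pi(a | s, z): action logits depend on both the state and the latent z,
    # so the same weights can express different behaviours as z shifts.
    logits = W[state] @ z + b[state]
    return softmax(logits)

def entropy(p):
    return -(p * np.log(p)).sum()

alpha = 0.1                                       # entropy temperature

# Maximum entropy objective at one state: expected Q plus an entropy bonus,
# which is what the actor would ascend during training.
s = 0
p = policy(s, z)
objective = p @ q[s] + alpha * entropy(p)
print(round(float(objective), 4))
```

In the actual method the latent is inferred by a learned variational model over past experience rather than sampled at random, but the objective shape — expected return plus an entropy term, with everything conditioned on the latent — is the part this sketch shows.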
 

Revision as of 07:16, 6 July 2020




Continuous Action