Lifelong Latent Actor-Critic (LILAC)
- Lifelong Learning
- Reinforcement Learning (RL):
  - Monte Carlo (MC) Method - Model Free Reinforcement Learning
  - Markov Decision Process (MDP)
  - Q Learning
  - State-Action-Reward-State-Action (SARSA)
  - Deep Reinforcement Learning (DRL) DeepRL
  - Distributed Deep Reinforcement Learning (DDRL)
  - Deep Q Network (DQN)
  - Evolutionary Computation / Genetic Algorithms
  - Actor Critic
  - Advanced Actor Critic (A2C)
  - Asynchronous Advantage Actor Critic (A3C)
  - Lifelong Latent Actor-Critic (LILAC)
  - Hierarchical Reinforcement Learning (HRL)
Researchers from the Stanford AI Lab (SAIL) have devised a method for handling data and environments that change over time, and it outperforms some leading approaches to reinforcement learning. Lifelong Latent Actor-Critic (LILAC) uses latent variable models and a maximum entropy policy to leverage past experience for better sample efficiency and performance in dynamic environments. A rough sketch of these two ideas follows below.
Source: Stanford AI researchers introduce LILAC, reinforcement learning for dynamic environments | Khari Johnson - VentureBeat
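The following is a minimal sketch, not LILAC's actual implementation: a maximum entropy actor-critic update (soft TD target plus an entropy-regularized policy step) in which both the critic and the policy are conditioned on a per-episode latent vector meant to summarize past experience in a changing environment. The toy environment, the linear function approximation, and the helper encode_episode (a stand-in for a learned latent variable model) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, latent_dim = 5, 2, 3
alpha, gamma, lr = 0.1, 0.99, 0.05   # entropy weight, discount, step size

# Linear critic Q(s, z, a) and softmax policy pi(a | s, z), both conditioned
# on a per-episode latent z intended to summarize the current environment.
critic_w = np.zeros((n_states + latent_dim, n_actions))
policy_w = np.zeros((n_states + latent_dim, n_actions))

def features(state, z):
    one_hot = np.zeros(n_states)
    one_hot[state] = 1.0
    return np.concatenate([one_hot, z])

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def encode_episode(transitions):
    # Hypothetical stand-in for LILAC's learned latent variable model:
    # compress the previous episode into a fixed-length vector z.
    if not transitions:
        return np.zeros(latent_dim)
    mean_state = np.mean([s for s, _, _, _ in transitions])
    return np.tanh(np.full(latent_dim, mean_state / n_states))

def soft_actor_critic_step(state, action, reward, next_state, z):
    phi, phi_next = features(state, z), features(next_state, z)

    # Critic: soft TD target with an entropy bonus at the next state.
    pi_next = softmax(phi_next @ policy_w)
    q_next = phi_next @ critic_w
    v_soft = pi_next @ (q_next - alpha * np.log(pi_next + 1e-8))
    td_error = reward + gamma * v_soft - (phi @ critic_w)[action]
    critic_w[:, action] += lr * td_error * phi

    # Actor: gradient ascent on E_pi[Q - alpha * log pi] at this state.
    pi = softmax(phi @ policy_w)
    soft_q = phi @ critic_w - alpha * np.log(pi + 1e-8)
    policy_w += lr * np.outer(phi, pi * (soft_q - pi @ soft_q))

# Toy usage: the reward rule drifts across episodes to mimic a
# non-stationary world; z is re-encoded after every episode.
z = np.zeros(latent_dim)
for episode in range(20):
    transitions, state = [], rng.integers(n_states)
    for _ in range(10):
        probs = softmax(features(state, z) @ policy_w)
        action = rng.choice(n_actions, p=probs)
        reward = 1.0 if action == (episode // 5) % n_actions else 0.0
        next_state = rng.integers(n_states)
        soft_actor_critic_step(state, action, reward, next_state, z)
        transitions.append((state, action, reward, next_state))
        state = next_state
    z = encode_episode(transitions)
```

In this sketch the latent z only shifts the features seen by the policy and critic; LILAC itself learns the latent variable model jointly with the policy, which is what lets it adapt across a lifetime of changing tasks.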