Actor Critic
Revision as of 11:41, 3 July 2020
- Reinforcement Learning (RL):
- Monte Carlo (MC) Method - Model Free Reinforcement Learning
- Markov Decision Process (MDP)
- Q Learning
- State-Action-Reward-State-Action (SARSA)
- Deep Reinforcement Learning (DRL) DeepRL
- Distributed Deep Reinforcement Learning (DDRL)
- Deep Q Network (DQN)
- Evolutionary Computation / Genetic Algorithms
- Actor Critic
  - Advanced Actor Critic (A2C)
  - Asynchronous Advantage Actor Critic (A3C)
  - Lifelong Latent Actor-Critic (LILAC)
- Hierarchical Reinforcement Learning (HRL)
- Beyond DQN/A3C: A Survey in Advanced Reinforcement Learning | Joyce Xu - Towards Data Science
- Policy Gradient (PG)
Policy gradients and Deep Q Networks (DQN) can only get us so far, but what if we used two networks to help train an AI instead of one? That's the idea behind actor critic algorithms: an actor network learns the policy, while a critic network estimates values and tells the actor how good its actions were.
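The two-network idea can be sketched in a few lines. The following is a minimal illustrative example, not the page's own implementation: because the environment here is a tiny two-state toy MDP (an assumption made for brevity), the "networks" are just tables, but the structure is the standard one — the critic learns state values by TD learning, and its TD error scores the actor's policy-gradient update. All hyperparameters and the toy dynamics are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 2, 2
GAMMA = 0.9
ALPHA_ACTOR, ALPHA_CRITIC = 0.1, 0.2

# Actor: policy preferences (logits) per state; critic: state-value estimates.
prefs = np.zeros((N_STATES, N_ACTIONS))
values = np.zeros(N_STATES)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def step(state, action):
    # Toy dynamics (illustrative): action 1 in state 0 reaches the terminal
    # state 1 with reward +1; anything else stays in state 0 with reward 0.
    if state == 0 and action == 1:
        return 1, 1.0, True
    return 0, 0.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        probs = softmax(prefs[state])
        action = rng.choice(N_ACTIONS, p=probs)
        next_state, reward, done = step(state, action)

        # Critic update: the TD error is the learning signal for both parts.
        target = reward + (0.0 if done else GAMMA * values[next_state])
        td_error = target - values[state]
        values[state] += ALPHA_CRITIC * td_error

        # Actor update: policy-gradient step, scaled by the critic's TD error.
        grad = -probs                # gradient of log-softmax w.r.t. logits...
        grad[action] += 1.0          # ...for the action actually taken
        prefs[state] += ALPHA_ACTOR * td_error * grad

        state = next_state

print(softmax(prefs[0]))  # the rewarded action should come to dominate
```

In a deep actor-critic method, `prefs` and `values` would be the outputs of two neural networks (or two heads of one network), but the update rule — actor moves in the direction the critic's TD error suggests — is the same.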