Distributed Deep Reinforcement Learning (DDRL)
<youtube>-YMfJLFynmA</youtube>
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
Revision as of 20:00, 1 September 2019
- IMPALA (Importance Weighted Actor-Learner Architecture)
- Importance Weighted Actor-Learner Architectures: Scalable Distributed DeepRL in DMLab-30
- Reinforcement Learning (RL):
- Monte Carlo (MC) Method - Model Free Reinforcement Learning
- Markov Decision Process (MDP)
- Q Learning
- State-Action-Reward-State-Action (SARSA)
- Deep Reinforcement Learning (DRL) DeepRL
- Deep Q Network (DQN)
- Evolutionary Computation / Genetic Algorithms
- Actor Critic
- Hierarchical Reinforcement Learning (HRL)
IMPALA (Importance Weighted Actor-Learner Architecture) is a new, highly scalable agent architecture for distributed training that uses a novel off-policy correction algorithm called V-trace. Many distributed actors generate experience trajectories under a behaviour policy and send them to a central learner, which computes gradient updates; because the learner's policy runs ahead of the actors' behaviour policy, V-trace uses clipped importance weights to correct the resulting off-policy lag.
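The V-trace target for a trajectory can be written recursively as v_s = V(x_s) + δ_s + γ c_s (v_{s+1} − V(x_{s+1})), with δ_s = ρ_s (r_s + γ V(x_{s+1}) − V(x_s)) and clipped importance ratios ρ_s and c_s. Below is a minimal NumPy sketch of that computation; the function name, argument shapes, and default clipping thresholds are illustrative choices, not the official IMPALA implementation.

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Sketch of V-trace targets v_s for one n-step trajectory.

    rewards, values, rhos: arrays of length T for steps s..s+T-1.
    rhos are the importance ratios pi(a|x) / mu(a|x) between the
    learner's current policy pi and the actor's behaviour policy mu.
    bootstrap_value is V(x_{s+T}) from the learner's value network.
    """
    T = len(rewards)
    clipped_rhos = np.minimum(rho_bar, rhos)  # rho_s = min(rho_bar, pi/mu)
    cs = np.minimum(c_bar, rhos)              # c_s   = min(c_bar,  pi/mu)

    # V(x_{t+1}) for each step, using the bootstrap value at the end.
    values_tp1 = np.append(values[1:], bootstrap_value)

    # Importance-weighted temporal-difference errors delta_s.
    deltas = clipped_rhos * (rewards + gamma * values_tp1 - values)

    # Backward recursion: v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1})).
    vs = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * cs[t] * acc
        vs[t] = values[t] + acc
    return vs
```

In the fully on-policy case (all ratios equal to 1 with the default thresholds), the targets reduce to the standard n-step Bellman returns, which is a useful sanity check for any V-trace implementation.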