Distributed Deep Reinforcement Learning (DDRL)

* [http://deepmind.com/blog/impala-scalable-distributed-deeprl-dmlab-30/ Importance Weighted Actor-Learner Architectures: Scalable Distributed DeepRL in DMLab-30]
 
* [[Federated]] Learning
 
* [[Reinforcement Learning (RL)]]
 
** [[Monte Carlo]] (MC) Method - Model-Free Reinforcement Learning
 
** [[Evolutionary Computation / Genetic Algorithms]]
 
** [[Actor Critic]]
 
*** [[Advanced Actor Critic (A2C)|Advantage Actor Critic (A2C)]]

*** [[Asynchronous Advantage Actor Critic (A3C)]]

*** [[Lifelong Latent Actor-Critic (LILAC)]]
 
** [[Hierarchical Reinforcement Learning (HRL)]]
 
The Importance Weighted Actor-Learner Architecture (IMPALA) is a new, highly scalable agent architecture for distributed training that uses a new off-policy correction algorithm called V-trace.
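
The core idea is to decouple acting from learning: many actor processes collect experience with slightly stale copies of the policy, while a central learner consumes batched unrolls and performs the gradient updates. Below is a minimal, self-contained sketch of that actor-learner split using Python's multiprocessing with dummy trajectories; every name here (actor, learner, UNROLL_LENGTH) is an illustrative stand-in, not DeepMind's actual implementation.

<pre>
import multiprocessing as mp
import random

UNROLL_LENGTH = 20  # fixed-length trajectory segment shipped by each actor

def actor(trajectory_queue, actor_id):
    """Actor: steps a local environment copy with a (possibly stale) policy
    and ships whole unrolls to the learner. Dummy data stands in for a real
    environment and policy here."""
    rng = random.Random(actor_id)
    while True:
        # Each step: (observation, action, reward, behaviour log-prob).
        unroll = [(rng.random(), rng.randrange(4), rng.random(), -1.39)
                  for _ in range(UNROLL_LENGTH)]
        trajectory_queue.put((actor_id, unroll))

def learner(trajectory_queue, num_updates=5, batch_size=4):
    """Learner: batches unrolls from all actors; a real learner would
    compute V-trace targets here and take a gradient step."""
    for step in range(num_updates):
        batch = [trajectory_queue.get() for _ in range(batch_size)]
        print(f"update {step}: unrolls from actors {[a for a, _ in batch]}")

if __name__ == "__main__":
    queue = mp.Queue(maxsize=64)
    for i in range(4):
        mp.Process(target=actor, args=(queue, i), daemon=True).start()
    learner(queue)
</pre>

Because each actor's policy copy lags the learner by a few updates, the trajectories it sends are slightly off-policy; V-trace is the correction that makes learning from them stable.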
 
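V-trace corrects that policy lag with clipped importance ratios rho_t = min(rho_bar, pi(a_t|x_t) / mu(a_t|x_t)) and c_t = min(c_bar, pi/mu), where mu is the actor's behaviour policy and pi is the learner's current policy. A minimal NumPy sketch following the formulas in the IMPALA paper, assuming a single unroll with no episode boundaries (the function name and signature are hypothetical):

<pre>
import numpy as np

def vtrace_targets(behaviour_logp, target_logp, rewards, values,
                   bootstrap_value, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace value targets for one unroll (after Espeholt et al., 2018).

    behaviour_logp / target_logp: log mu(a_t|x_t) and log pi(a_t|x_t), shape [T]
    rewards, values: r_t and the critic's V(x_t), shape [T]
    bootstrap_value: V(x_T) for the state that follows the unroll
    """
    rhos = np.exp(target_logp - behaviour_logp)    # importance ratios pi/mu
    clipped_rhos = np.minimum(rhos, rho_bar)       # rho_t, bounds the bias
    clipped_cs = np.minimum(rhos, c_bar)           # c_t, bounds the variance

    values_tp1 = np.append(values[1:], bootstrap_value)          # V(x_{t+1})
    deltas = clipped_rhos * (rewards + gamma * values_tp1 - values)

    # Backward recursion: v_s - V(x_s) = delta_s + gamma*c_s*(v_{s+1} - V(x_{s+1}))
    acc = 0.0
    vs_minus_v = np.zeros_like(values)
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * clipped_cs[t] * acc
        vs_minus_v[t] = acc
    return values + vs_minus_v                     # critic targets v_s
</pre>

With rho_bar = c_bar = 1 and on-policy data (pi = mu), these targets reduce to the ordinary n-step Bellman target, which is a convenient sanity check.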
<youtube>-YMfJLFynmA</youtube>
 