Difference between revisions of "Hierarchical Reinforcement Learning (HRL)"

 
** [[Evolutionary Computation / Genetic Algorithms]]

** [[Actor Critic]]

*** [[Advantage Actor Critic (A2C)]]

*** [[Asynchronous Advantage Actor Critic (A3C)]]

*** [[Lifelong Latent Actor-Critic (LILAC)]]

** Hierarchical Reinforcement Learning (HRL)
  
 

Revision as of 11:51, 3 July 2020


Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional Reinforcement Learning (RL) methods to solve more complex tasks.
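The core idea can be illustrated with a two-level hierarchy: a high-level policy proposes subgoals on a coarse timescale, and a low-level policy acts to reach the current subgoal. The sketch below is a minimal toy example; the policy functions and the 1-D dynamics are illustrative assumptions, not from any particular HRL implementation.

```python
import random

def high_level_policy(state):
    # Hypothetical high-level policy: pick a nearby target state as the subgoal.
    return state + random.choice([-2, -1, 1, 2])

def low_level_policy(state, goal):
    # Hypothetical low-level policy: greedy step toward the subgoal.
    return 1 if goal > state else -1

def run_episode(start_state=0, steps=12, horizon=3):
    """Two-level control loop: the high level re-plans every `horizon` steps,
    the low level acts at every step."""
    state, trajectory = start_state, []
    goal = high_level_policy(state)
    for t in range(steps):
        if t % horizon == 0:
            goal = high_level_policy(state)  # coarse-timescale decision
        action = low_level_policy(state, goal)
        state += action                      # toy deterministic dynamics
        trajectory.append(state)
    return trajectory

print(run_episode())
```

The separation of timescales is what lets HRL tackle long-horizon tasks: the high level reasons over a handful of subgoal decisions rather than hundreds of primitive actions.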


HIerarchical Reinforcement learning with Off-policy correction (HIRO)

HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, HIRO substantially outperforms previous techniques.
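In HIRO, the high-level action g is interpreted as a desired change in state, and the low-level policy is trained on a goal-conditioned intrinsic reward for reaching s_t + g; between low-level steps the goal is relabeled so the absolute target stays fixed. The sketch below follows those two formulas from the HIRO paper (the function names are mine):

```python
import math

def intrinsic_reward(s_t, g, s_next):
    # Low-level reward: negative Euclidean distance between the reached
    # state s_next and the target s_t + g.
    return -math.sqrt(sum((st + gi - sn) ** 2
                          for st, gi, sn in zip(s_t, g, s_next)))

def goal_transition(s_t, g, s_next):
    # Relabel the goal after a step so that s_next + g' == s_t + g,
    # i.e. the absolute target is unchanged as the state moves.
    return tuple(st + gi - sn for st, gi, sn in zip(s_t, g, s_next))

# Reaching the target exactly yields zero (maximal) intrinsic reward.
r = intrinsic_reward((0.0, 0.0), (1.0, 2.0), (1.0, 2.0))
```

The "off-policy correction" in HIRO's name refers to relabeling high-level actions in the replay buffer so that old experience remains consistent with the current low-level policy; that machinery is omitted here.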
