Hierarchical Reinforcement Learning (HRL)
 
** [[State-Action-Reward-State-Action (SARSA)]]
** [[Deep Reinforcement Learning (DRL)]] DeepRL
*** [[IMPALA (Importance Weighted Actor-Learner Architecture)]]
** [[Distributed Deep Reinforcement Learning (DDRL)]]
** [[Deep Q Network (DQN)]]
** [[Evolutionary Computation / Genetic Algorithms]]
** [[Asynchronous Advantage Actor Critic (A3C)]]
** [[Actor Critic]]
** [[MERLIN]]



Hierarchical reinforcement learning (HRL) is a promising approach for extending traditional Reinforcement Learning (RL) methods to more complex tasks. The core idea is temporal abstraction: a high-level policy decomposes the task into sub-goals or sub-policies, while low-level policies issue the primitive actions that achieve them.
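As a rough illustration of this decomposition, the sketch below implements a two-level control loop in the style of the options framework. It assumes a Gym-style environment and hand-written sub-policies; every name in it is illustrative rather than a real API.

<syntaxhighlight lang="python">
# A minimal sketch of temporal abstraction in HRL (options-framework style).
# Assumes a Gym-style env with reset()/step(); all names are illustrative.

class Option:
    """A sub-policy paired with its own termination condition."""
    def __init__(self, policy_fn, terminate_fn):
        self.act = policy_fn        # state -> primitive action
        self.done = terminate_fn    # state -> True when the option should end

def run_hierarchy(env, high_level_policy, options, max_steps=1000):
    """The high-level policy picks an option; the option runs to termination."""
    state = env.reset()
    total_reward, steps = 0.0, 0
    while steps < max_steps:
        # One high-level decision covers many primitive time steps.
        option = options[high_level_policy(state)]
        while steps < max_steps and not option.done(state):
            state, reward, terminal, _ = env.step(option.act(state))
            total_reward += reward
            steps += 1
            if terminal:
                return total_reward
    return total_reward
</syntaxhighlight>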


HIerarchical Reinforcement learning with Off-policy correction (HIRO)

HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples (equivalent to a few days of real-time interaction). In comparisons with a number of prior HRL methods, the authors report that HIRO substantially outperforms them.
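Two ingredients give HIRO its name (Nachum et al., 2018): the lower-level policy is rewarded for reaching goals that the higher level sets in state space, and stored high-level transitions are relabelled with a goal that the ''current'' lower-level policy would most plausibly have been pursuing. The sketch below shows both pieces; <code>lo_policy</code>, the candidate-sampling details, and the array shapes are assumptions for illustration, not the authors' code.

<syntaxhighlight lang="python">
import numpy as np

def intrinsic_reward(s, g, s_next):
    """Lower-level reward: negative distance to the relative goal g."""
    return -np.linalg.norm(s + g - s_next)

def goal_transition(s, g, s_next):
    """Re-express the goal relative to the new state between high-level steps."""
    return s + g - s_next

def relabel_goal(states, actions, lo_policy, n_candidates=8):
    """Off-policy correction: choose the goal that best explains the stored
    lower-level actions under the *current* lower-level policy.
    (HIRO also keeps the originally stored goal among the candidates.)"""
    s0, s_end = states[0], states[-1]
    displacement = s_end - s0
    candidates = [displacement] + [
        displacement + np.random.randn(*s0.shape) for _ in range(n_candidates)
    ]

    def log_likelihood(g):
        # Squared-error surrogate for the log-probability of the stored actions.
        err, goal = 0.0, g
        for s, a, s_next in zip(states[:-1], actions, states[1:]):
            err -= np.sum((a - lo_policy(s, goal)) ** 2)
            goal = goal_transition(s, goal, s_next)
        return err

    return max(candidates, key=log_likelihood)
</syntaxhighlight>

The relabelled goal is then used in place of the stored one when training the higher-level policy off-policy, which is what makes old experience reusable even as the lower-level policy changes.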
