Hierarchical Reinforcement Learning (HRL)

  
 
<youtube>x_QjJry0hTc</youtube>
<youtube>QEmuhofpFIU</youtube>
<youtube>zQy02LsARo0</youtube>
<youtube>K5MlmO0UJtI</youtube>


Youtube search... ...Google search


Hierarchical Reinforcement Learning (HRL) is a promising approach for extending traditional Reinforcement Learning (RL) methods to more complex tasks: it decomposes a long-horizon problem into a hierarchy in which higher-level policies set subgoals or select subtasks and lower-level policies carry them out.


HIerarchical Reinforcement learning with Off-policy correction (HIRO)

HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and using them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, HIRO substantially outperforms previous state-of-the-art techniques.
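HIRO pairs a high-level policy, which proposes a goal (a desired change in state) every c steps, with a low-level policy that is rewarded for moving the state toward that goal; an off-policy correction relabels old high-level goals so past experience stays usable for training. The sketch below is a minimal illustration of only the two-level rollout structure, not the authors' implementation: high_level_policy, low_level_policy, and env_step are hypothetical placeholders, and the off-policy goal relabeling is omitted.

<syntaxhighlight lang="python">
# Minimal sketch of HIRO's two-level rollout structure (not the authors' code):
# a high-level policy proposes a relative goal every c steps, and the low-level
# policy is rewarded for moving the state toward that goal. The policies and
# dynamics below are hypothetical placeholders; HIRO's off-policy goal
# relabeling for the high-level replay buffer is omitted.
import numpy as np

rng = np.random.default_rng(0)
state_dim, c, horizon = 4, 10, 50          # toy sizes, chosen for illustration

def high_level_policy(state):
    # Placeholder: propose a desired change in state (a relative goal).
    return rng.normal(size=state_dim)

def low_level_policy(state, goal):
    # Placeholder: act so as to reduce the gap to the goal.
    return 0.1 * goal

def env_step(state, action):
    # Placeholder dynamics and task reward.
    next_state = state + action + 0.01 * rng.normal(size=state_dim)
    return next_state, -float(np.linalg.norm(next_state))

state = np.zeros(state_dim)
for t in range(horizon):
    if t % c == 0:
        goal = high_level_policy(state)                 # new goal every c steps
    action = low_level_policy(state, goal)
    next_state, task_reward = env_step(state, action)
    # Low-level intrinsic reward: negative distance to the goal state.
    intrinsic_reward = -float(np.linalg.norm(state + goal - next_state))
    # Re-express the goal relative to the new state so it keeps pointing at
    # the same target as the agent moves (HIRO's goal transition function).
    goal = state + goal - next_state
    state = next_state
</syntaxhighlight>

In practice, the low-level policy is trained off-policy against the intrinsic reward, while the high-level policy is trained on the environment's task reward accumulated over each c-step segment.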
