Hierarchical Reinforcement Learning (HRL)
Youtube search... ...Google search
- The Promise of Hierarchical Reinforcement Learning | Yannis Flet-Berliac - The Gradient
- Hierarchical Reinforcement Learning | David Jardim
- Reinforcement Learning (RL):
  - Monte Carlo (MC) Method - Model Free Reinforcement Learning
  - Markov Decision Process (MDP)
  - Q Learning
  - State-Action-Reward-State-Action (SARSA)
  - Deep Reinforcement Learning (DRL) DeepRL
  - Distributed Deep Reinforcement Learning (DDRL)
  - Deep Q Network (DQN)
  - Evolutionary Computation / Genetic Algorithms
  - Actor Critic
  - MERLIN
Hierarchical Reinforcement Learning (HRL) is a promising approach for extending traditional Reinforcement Learning (RL) methods to more complex, long-horizon tasks: a high-level policy decomposes the task into sub-goals (or options), and low-level policies learn to achieve them, as sketched below.
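Below is a minimal, illustrative sketch of the two-level control loop that most goal-conditioned HRL methods share: a high-level policy emits a sub-goal every few steps, and a goal-conditioned low-level policy selects primitive actions to reach it. The toy environment and the stand-in policies are assumptions for illustration only, not part of any method cited on this page.

```python
import numpy as np

# Minimal illustrative sketch of the two-level HRL control loop (the toy
# environment and stand-in policies below are assumptions, not from the
# cited sources): a high-level policy emits a sub-goal every `horizon`
# steps, and a goal-conditioned low-level policy acts to reach it.

rng = np.random.default_rng(0)

class ToyPointEnv:
    """2-D point agent that must reach a fixed target; purely for illustration."""
    def __init__(self):
        self.target = np.array([5.0, 5.0])

    def reset(self):
        self.pos = np.zeros(2)
        return self.pos.copy()

    def step(self, action):
        self.pos += np.clip(action, -1.0, 1.0)
        dist = np.linalg.norm(self.target - self.pos)
        return self.pos.copy(), -dist, dist < 0.5, {}

def high_policy(state):
    # Stand-in for a learned high-level policy: propose a sub-goal
    # (a desired change in state) with some exploration noise.
    return rng.normal(loc=[1.0, 1.0], scale=0.5)

def low_policy(state, goal):
    # Stand-in for a learned goal-conditioned low-level policy: act toward
    # the sub-goal; its intrinsic reward would be -||goal - achieved change||.
    return np.clip(goal, -1.0, 1.0)

def rollout(env, horizon=10, max_steps=100):
    state = env.reset()
    goal = high_policy(state)          # the high-level "action" is a sub-goal
    total_reward = 0.0
    for t in range(max_steps):
        if t > 0 and t % horizon == 0:
            goal = high_policy(state)  # re-plan a new sub-goal every `horizon` steps
        state, reward, done, _ = env.step(low_policy(state, goal))
        total_reward += reward
        if done:
            break
    return total_reward

print(rollout(ToyPointEnv()))
```

In a full HRL agent, both levels are learned: the high-level policy is typically trained on the environment reward, while the low-level policy is trained on an intrinsic reward for reaching the sub-goals it is given.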
HIerarchical Reinforcement learning with Off-policy correction (HIRO)
- Beyond DQN/A3C: A Survey in Advanced Reinforcement Learning | Joyce Xu - Towards Data Science
- Data-Efficient Hierarchical Reinforcement Learning | O. Nachum, S. Gu, H. Lee, and S. Levine - Google Brain
HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, the authors find that HIRO substantially outperforms previous state-of-the-art techniques.
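The off-policy correction that gives HIRO its name addresses a subtle problem: as the low-level policy changes during training, a sub-goal stored in the high-level replay buffer no longer produces the low-level actions that were actually logged, so that experience becomes stale for training the high level. Below is a rough sketch of the relabeling idea, assuming a deterministic goal-conditioned low-level policy; the function and parameter names are hypothetical, not from the paper's code. The stored goal is replaced by the candidate goal under which the current low-level policy would most closely reproduce the observed actions.

```python
import numpy as np

def relabel_goal(low_policy, states, actions, old_goal, num_candidates=8, rng=None):
    """Illustrative sketch of HIRO-style off-policy goal relabeling: pick the
    candidate goal under which the *current* low-level policy best explains the
    logged primitive actions, and store that goal instead of the stale one when
    training the high-level policy off-policy."""
    rng = rng or np.random.default_rng()
    achieved = states[-1] - states[0]          # state change actually achieved
    candidates = [old_goal, achieved]
    candidates += [achieved + rng.normal(scale=0.5, size=achieved.shape)
                   for _ in range(num_candidates)]

    def action_error(goal):
        # For a deterministic low-level policy, maximizing the likelihood of
        # the observed actions reduces to minimizing squared action differences.
        g = goal.copy()
        err = 0.0
        for s, s_next, a in zip(states[:-1], states[1:], actions):
            err += np.sum((low_policy(s, g) - a) ** 2)
            g = s + g - s_next                 # goal transition: h(s, g, s') = s + g - s'
        return err

    return min(candidates, key=action_error)
```

In the paper, the candidate set includes the original goal, the goal matching the state change actually achieved, and several goals sampled around it; the best-scoring candidate is then substituted into the high-level transition before it is used for off-policy training.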