Hierarchical Reinforcement Learning (HRL)
Revision as of 17:05, 28 March 2023
YouTube search: https://www.youtube.com/results?search_query=Hierarchical+Reinforcement+Learning
Google search: https://www.google.com/search?q=Hierarchical+Reinforcement+machine+learning+ML+artificial+intelligence
- The Promise of Hierarchical Reinforcement Learning | Yannis Flet-Berliac - The Gradient: https://thegradient.pub/the-promise-of-hierarchical-reinforcement-learning
- Hierarchical Reinforcement Learning | David Jardim: https://www.slideshare.net/DavidJardim/hierarchical-reinforcement-learning
- Reinforcement Learning (RL)
- Monte Carlo (MC) Method - Model Free Reinforcement Learning
- Markov Decision Process (MDP)
- State-Action-Reward-State-Action (SARSA)
- Q Learning
- Deep Reinforcement Learning (DRL) / DeepRL
- Distributed Deep Reinforcement Learning (DDRL)
- Evolutionary Computation / Genetic Algorithms
- Actor Critic
- Hierarchical Reinforcement Learning (HRL)
HRL is a promising approach for extending traditional Reinforcement Learning (RL) methods to more complex tasks: a high-level policy decomposes a long-horizon problem into sub-tasks (such as options or subgoals) that lower-level policies learn to solve.

Video: https://www.youtube.com/watch?v=ARfpQzRCWT4

https://thegradient.pub/content/images/2019/03/image44.png
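The decomposition idea can be made concrete with a minimal sketch of temporal abstraction, the core mechanism of HRL: the high-level policy selects among temporally extended "options" rather than primitive actions. The toy environment, the fixed options, and the hard-coded high-level policy below are illustrative assumptions, not taken from any specific paper.

```python
# Minimal sketch of temporal abstraction (options-style HRL).
# Everything here is illustrative: a toy corridor environment,
# fixed low-level option policies, and a hard-coded high-level policy.

class CorridorEnv:
    """Toy 1-D corridor: states 0..10, reward 1.0 on reaching state 10."""
    def __init__(self):
        self.state = 0

    def step(self, action):  # action is -1 or +1
        self.state = max(0, min(10, self.state + action))
        done = self.state == 10
        return self.state, (1.0 if done else 0.0), done

def make_option(direction, k=3):
    """An option: a primitive policy (always move `direction`) plus a
    termination condition (stop after k steps or at the goal)."""
    def run(env):
        total, done = 0.0, False
        for _ in range(k):
            s, r, done = env.step(direction)
            total += r
            if done:
                break
        return s, total, done
    return run

options = [make_option(-1), make_option(+1)]

def high_level_policy(state):
    # Chooses among options instead of primitive actions. Hard-coded to
    # "move right" here; in real HRL this choice would itself be learned
    # (e.g. Q-learning over option indices).
    return 1

env = CorridorEnv()
state, done, decisions = 0, False, 0
while not done:
    state, reward, done = options[high_level_policy(state)](env)
    decisions += 1  # one high-level decision covers up to k primitive steps
```

The payoff of the hierarchy is visible in the counts: the agent reaches state 10 after only 4 high-level decisions, while a flat agent would make 10 primitive-action decisions over the same trajectory.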
HIerarchical Reinforcement learning with Off-policy correction (HIRO)
- Beyond DQN/A3C: A Survey in Advanced Reinforcement Learning | Joyce Xu - Towards Data Science: https://towardsdatascience.com/advanced-reinforcement-learning-6d769f529eb3
- Data-Efficient Hierarchical Reinforcement Learning | O. Nachum, S. Gu, H. Lee, and S. Levine - Google Brain: https://arxiv.org/pdf/1805.08296.pdf
HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples — equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, the authors find that HIRO substantially outperforms previous state-of-the-art techniques.

Video: https://www.youtube.com/watch?v=yLHzDky2ApI

https://miro.medium.com/max/678/1*Fq-TQ7Mu2XDOIZ6R7dkRjw.png
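HIRO's central mechanism is easy to state in code: the higher-level policy emits a goal g (a desired change in state), the lower level is rewarded for reaching s + g, and the goal is re-expressed after each step so the absolute target stays fixed. The sketch below follows the intrinsic-reward and goal-transition formulas from the Nachum et al. paper; the vectors in the example are illustrative, and the off-policy correction itself (relabeling stored goals so old lower-level experience matches the current policies) is omitted.

```python
import numpy as np

# Sketch of HIRO's goal-conditioned intrinsic reward and goal
# transition, following the formulas in Nachum et al. (2018):
#   r(s, g, s') = -||s + g - s'||
#   h(s, g, s') =  s + g - s'
# The example vectors below are illustrative assumptions.

def intrinsic_reward(s, g, s_next):
    """Lower-level reward: negative distance between the state the
    higher level asked for (s + g) and the state actually reached."""
    return -np.linalg.norm(s + g - s_next)

def goal_transition(s, g, s_next):
    """Re-express the goal relative to the new state so the desired
    absolute target s + g stays fixed within one high-level interval."""
    return s + g - s_next

s      = np.array([0.0, 0.0])
g      = np.array([2.0, 0.0])   # higher level: "move 2 units along x"
s_next = np.array([1.0, 0.0])   # lower level actually moved 1 unit

r  = intrinsic_reward(s, g, s_next)   # -1.0: still 1 unit short of s + g
g2 = goal_transition(s, g, s_next)    # remaining goal: [1, 0]
```

Because the intrinsic reward depends only on states, the lower level can be trained off-policy from replayed transitions; the off-policy correction handles the matching problem on the higher level's side.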