Deep Reinforcement Learning (DRL)
Revision as of 21:13, 26 May 2018
- Deep Q Learning (DQN)
- State-Action-Reward-State-Action (SARSA)
- Deep Deterministic Policy Gradient (DDPG)
- Trust Region Policy Optimization (TRPO)
- Proximal Policy Optimization (PPO)
- Neural Coreference
- [http://gym.openai.com/ Gym | OpenAI]
- [https://towardsdatascience.com/introduction-to-various-reinforcement-learning-algorithms-i-q-learning-sarsa-dqn-ddpg-72a5e0cb6287 Introduction to Various Reinforcement Learning Algorithms. Part I (Q-Learning, SARSA, DQN, DDPG) | Steeve Huang]
- Introduction to Various Reinforcement Learning Algorithms. Part II (TRPO, PPO) | Steeve Huang
Guide
Deep reinforcement learning comprises goal-oriented algorithms that learn how to attain a complex objective (goal), or to maximize along a particular dimension over many steps; for example, maximizing the points won in a game over many moves. Reinforcement learning solves the difficult problem of correlating immediate actions with the delayed returns they produce. Like humans, reinforcement learning algorithms sometimes have to wait a while to see the fruit of their decisions: they operate in a delayed-return environment, where it can be difficult to tell which action leads to which outcome over many time steps.
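The delayed-return problem described above can be made concrete with a minimal tabular Q-learning sketch (the environment, hyperparameters, and variable names below are illustrative assumptions, not from the original text): a five-state chain where only reaching the rightmost state yields a reward, so the value of early moves must be propagated backwards through many update steps.

```python
import random

# Toy "delayed return" environment (an assumption for illustration):
# a chain of 5 states; only reaching the rightmost state gives a reward,
# so the first moves of an episode pay off only many steps later.
N_STATES = 5
ACTIONS = [0, 1]  # 0 = move left, 1 = move right

def step(state, action):
    """Deterministic transition; reward 1.0 only at the rightmost state."""
    if action == 1:
        next_state = min(state + 1, N_STATES - 1)
    else:
        next_state = max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def greedy_action(q_values):
    """Argmax over actions, breaking ties randomly so exploration can start."""
    best = max(q_values)
    return random.choice([a for a in ACTIONS if q_values[a] == best])

# Tabular Q-learning: the Bellman update credits each immediate action
# with the discounted return observed later, solving the correlation
# problem one step of bootstrapping at a time.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1
random.seed(0)

for episode in range(1000):
    state = 0
    for _ in range(100):  # cap episode length
        if random.random() < epsilon:
            action = random.choice(ACTIONS)      # explore
        else:
            action = greedy_action(Q[state])     # exploit
        next_state, reward, done = step(state, action)
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state
        if done:
            break

# Greedy policy per state (1 = right); it should head toward the reward.
greedy = [q.index(max(q)) for q in Q]
print(greedy)
```

After training, the Q-values of earlier states reflect the discounted reward of the distant goal (roughly gamma raised to the number of remaining steps), which is exactly the credit assignment over many time steps that the paragraph above describes.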