Q Learning
Revision as of 12:21, 1 September 2019
Youtube search... ...Google search
- Q Learning | Wikipedia
- Deep Q Learning (DQN)
- Reinforcement Learning (RL)
- Model Free Reinforcement learning algorithms (Monte Carlo, SARSA, Q-learning) | Madhu Sanjeevi (Mady) - Medium
- Monte Carlo
- Gaming
When feedback is provided, it might come a long time after the fateful decision was made. In reality, the feedback is likely to be the result of a large number of prior decisions, taken amid a shifting, uncertain environment. Unlike supervised learning, there are no correct input/output pairs, so suboptimal actions are not explicitly corrected; wrong actions simply decrease the corresponding value in the Q-table, meaning there is less chance of choosing the same action should the same state be encountered again. Quora | Jaron Collis
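The Q-table behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's API: the 1-D corridor environment, its reward of 1 at the goal state, and all hyperparameter values are invented for this example.

```python
import random

# Minimal tabular Q-learning sketch on a hypothetical 1-D corridor:
# states 0..4, actions 0 = left / 1 = right, reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition; reward only on reaching the goal."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for episode in range(200):
    state = 0
    for _ in range(100):                 # cap episode length
        # Epsilon-greedy choice; break Q-value ties randomly.
        qs = [Q[(state, a)] for a in ACTIONS]
        if random.random() < EPSILON or qs[0] == qs[1]:
            action = random.choice(ACTIONS)
        else:
            action = 0 if qs[0] > qs[1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt
        if done:
            break

# The learned greedy policy should head right toward the goal from every state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

Note that the update uses the max over next-state actions rather than the action the agent actually takes next, which is what makes Q-learning off-policy; an action that leads somewhere with low Q-values pulls its own table entry down, so it becomes less likely to be chosen from that state again.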
DeepMind trained deep neural networks to show that a novel end-to-end reinforcement learning agent, termed a deep Q-network (DQN), can achieve human-level control: Human-level control through Deep Reinforcement Learning | DeepMind