Deep Q Network (DQN)
* [[Deep Reinforcement Learning (DRL)]]
* [[Reinforcement Learning (RL)]]
* [[Gaming]]
* [http://en.wikipedia.org/wiki/Q-learning Wikipedia]
Q Learning (DQN)
When feedback is provided, it may arrive a long time after the fateful decision has been made. In reality, the feedback is likely to be the result of a large number of prior decisions, taken amid a shifting, uncertain environment. Unlike supervised learning, there are no correct input/output pairs, so suboptimal actions are not explicitly corrected; wrong actions simply decrease the corresponding value in the Q-table, meaning there is less chance of choosing the same action should the same state be encountered again. Quora | Jaron Collis
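To make the Q-table update concrete, the following is a minimal tabular Q-learning sketch; the state and action counts, learning rate, and epsilon-greedy policy are illustrative assumptions, not details from the article. A poor reward lowers Q[state, action], so greedy selection becomes less likely to repeat that action when the same state recurs.

<pre>
# Minimal tabular Q-learning sketch (toy sizes and hyperparameters are assumptions).
import numpy as np

n_states, n_actions = 16, 4           # hypothetical environment size
alpha, gamma, epsilon = 0.1, 0.99, 0.1

Q = np.zeros((n_states, n_actions))   # the Q-table, initialised to zero

def update(state, action, reward, next_state):
    """One Q-learning step: a low reward lowers Q[state, action],
    making that action less attractive next time this state is seen."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

def act(state):
    """Epsilon-greedy action selection over the Q-table."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(Q[state].argmax())
</pre>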
DeepMind trained deep neural networks to show that a novel end-to-end reinforcement learning agent, termed a deep Q-network (DQN), can reach human-level performance on a range of Atari games. Human-level control through Deep Reinforcement Learning | DeepMind
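As a rough illustration of the idea rather than DeepMind's actual implementation, the sketch below replaces the Q-table with a small neural network trained on the temporal-difference error. The state dimension, action count, network sizes, and the omission of the replay buffer and target-network sync schedule are simplifying assumptions.

<pre>
# Minimal DQN-style update sketch in PyTorch (toy dimensions are assumptions;
# replay buffer and periodic target-network syncing are omitted for brevity).
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99

q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                           nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())  # frozen copy, synced periodically
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(states, actions, rewards, next_states, dones):
    """One gradient step on the TD error.
    states/next_states: float tensors (B, state_dim); actions: long tensor (B,);
    rewards/dones: float tensors (B,)."""
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
</pre>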