Deep Q Network (DQN)
* [[Deep Reinforcement Learning (DRL)]]
* [[Gaming]]
* [http://en.wikipedia.org/wiki/Q-learning Wikipedia]
Revision as of 19:08, 1 July 2018
Q Learning (DQN)
When feedback is provided, it may arrive a long time after the fateful decision was made. In practice, the feedback is likely the result of a large number of prior decisions, taken amid a shifting, uncertain environment. Unlike supervised learning, there are no correct input/output pairs, so suboptimal actions are not explicitly corrected; a wrong action simply decreases the corresponding value in the Q-table, meaning there is less chance of choosing the same action should the same state be encountered again. Quora | Jaron Collis
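The mechanism described above can be sketched as a tabular Q-learning update. This is a minimal illustration, not the article's own code: the state and action names, the learning rate, and the discount factor are all assumptions chosen for the example.

```python
from collections import defaultdict

# Hyperparameters (illustrative values, not from the article)
ALPHA = 0.1    # learning rate: how far each update moves the estimate
GAMMA = 0.9    # discount factor: weight on future value

# Q-table: maps (state, action) -> estimated value, defaulting to 0.0
Q = defaultdict(float)

def q_update(state, action, reward, next_state, actions):
    """One Q-learning step: nudge Q[(state, action)] toward the observed
    reward plus the discounted best value available in the next state.
    A negative reward lowers the entry, so this action becomes less
    attractive the next time the same state is encountered."""
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# A "wrong" action (negative reward) decreases its Q-table entry,
# exactly the correction mechanism the paragraph describes.
actions = ["left", "right"]
q_update("s0", "left", -1.0, "s1", actions)
print(Q[("s0", "left")])  # now below zero
```

Starting from an all-zero table, the update moves `Q[("s0", "left")]` to `ALPHA * (-1.0)`, i.e. a small negative value, so a greedy policy in state `s0` now prefers the untouched alternative.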