Deep Q Network (DQN)

Revision as of 06:56, 18 May 2018

== Q Learning (DQN) ==

[http://www.youtube.com/results?search_query=deep+reinforcement+q+learning+artificial+intelligence+ Youtube search...]

* [[Deep Reinforcement Learning]]
* [http://medium.freecodecamp.org/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8 An introduction to Deep Q-Learning: let’s play Doom]
* [http://en.wikipedia.org/wiki/Q-learning Wikipedia]
When feedback is provided, it may arrive long after the fateful decision was made. In reality, the feedback is likely to be the result of a large number of prior decisions, taken amid a shifting, uncertain environment. Unlike supervised learning, there are no correct input/output pairs, so suboptimal actions are not explicitly corrected; a wrong action simply decreases the corresponding value in the Q-table, meaning there is less chance of choosing the same action should the same state be encountered again. Quora | Jaron Collis
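The mechanism described above can be sketched with the standard tabular Q-learning update, Q(s, a) ← Q(s, a) + α [r + γ max<sub>a′</sub> Q(s′, a′) − Q(s, a)]. The snippet below is a minimal illustration, not the article's own code; the state and action names and the learning-rate and discount values are assumptions chosen for the demo.

```python
from collections import defaultdict

ALPHA = 0.1   # learning rate (assumed value for illustration)
GAMMA = 0.9   # discount factor (assumed value for illustration)

def q_update(q_table, state, action, reward, next_state, actions):
    """One Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a').

    A negative reward lowers Q(s, a), so the action becomes less likely
    to be chosen the next time this state is encountered.
    """
    best_next = max(q_table[(next_state, a)] for a in actions)
    td_target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])

# Toy demonstration with hypothetical states "s0"/"s1" and two actions:
q = defaultdict(float)        # unvisited (state, action) pairs default to 0.0
actions = ["left", "right"]
q_update(q, "s0", "left", -1.0, "s1", actions)   # "wrong" action: value drops
q_update(q, "s0", "right", +1.0, "s1", actions)  # rewarded action: value rises
print(q[("s0", "left")], q[("s0", "right")])     # left is now below right
```

Note that no explicit "correct answer" was ever supplied: the table entries diverge purely from the scalar rewards, which is the contrast with supervised learning drawn in the paragraph above.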