Markov Decision Process (MDP)
https://upload.wikimedia.org/wikipedia/commons/thumb/a/ad/Markov_Decision_Process.svg/600px-Markov_Decision_Process.svg.png
Solutions:
* [http://www.google.com/search?q=Dynamic+Programming+reinforcement+learning&oq=Dynamic+Programming+reinforcement+learning Dynamic Programming]
* [http://www.google.com/search?ei=CpMKW-TXNMbWzgLdhJqIAQ&q=monte+carlo+reinforcement+learning&oq=monte+carlo+reinforcement+learning Monte Carlo]
* [http://www.google.com/search?ei=NJMKW97aLof_zgKM8KSgBA&q=Temporal+Difference+reinforcement+learning Temporal Difference Learning]
Used where outcomes are partly random and partly under the control of a decision maker. An MDP is a discrete-time stochastic control process. At each time step, the process is in some state s, and the decision maker may choose any action a that is available in state s. The process responds at the next time step by randomly moving into a new state s' and giving the decision maker a corresponding reward R_a(s, s'). The probability that the process moves into its new state s' is influenced by the chosen action; it is given by the state transition function P_a(s, s').
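The Dynamic Programming solution listed above can be made concrete with value iteration, which repeatedly applies the Bellman optimality backup V(s) ← max_a Σ_s' P_a(s, s') [R_a(s, s') + γ V(s')] until the values stop changing. Below is a minimal sketch; the two-state toy MDP, its transition probabilities and rewards, the discount factor γ = 0.9, and the convergence threshold are illustrative assumptions, not taken from this article.

<syntaxhighlight lang="python">
# Minimal value-iteration sketch for a toy MDP (illustrative, not from the article).
# P[state][action] is a list of (probability, next_state, reward) tuples,
# i.e. the P_a(s, s') and R_a(s, s') from the definition above.

GAMMA = 0.9    # discount factor (assumed)
THETA = 1e-6   # convergence threshold (assumed)

# Toy two-state MDP: in each state the agent may "stay" or "move".
P = {
    "s0": {
        "stay": [(1.0, "s0", 0.0)],
        "move": [(0.8, "s1", 5.0), (0.2, "s0", 0.0)],  # outcome is partly random
    },
    "s1": {
        "stay": [(1.0, "s1", 1.0)],
        "move": [(0.8, "s0", 0.0), (0.2, "s1", 1.0)],
    },
}

def value_iteration(P, gamma=GAMMA, theta=THETA):
    """Apply the Bellman optimality backup until the values converge."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Q(s, a) = sum over s' of P_a(s, s') * (R_a(s, s') + gamma * V(s'))
            q = {
                a: sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for a, outcomes in P[s].items()
            }
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            # Greedy policy with respect to the converged values
            policy = {
                s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                               for p, s2, r in P[s][a]))
                for s in P
            }
            return V, policy

values, policy = value_iteration(P)
print(values)   # converged state values
print(policy)   # greedy action per state
</syntaxhighlight>

Running the sketch prints the converged state values and the greedy policy derived from them; Monte Carlo and Temporal Difference methods estimate the same values from sampled experience instead of from the full transition model.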