Markov Decision Process (MDP)
YouTube search... ...Google search
- Reinforcement Learning (RL)
- Monte Carlo (MC) Method - Model Free Reinforcement Learning
- Markov Decision Process (MDP)
- State-Action-Reward-State-Action (SARSA)
- Q Learning
- Deep Reinforcement Learning (DRL) DeepRL
- Distributed Deep Reinforcement Learning (DDRL)
- Evolutionary Computation / Genetic Algorithms
- Actor Critic
- Hierarchical Reinforcement Learning (HRL)
Solutions:
Used where outcomes are partly random and partly under the control of a decision maker, an MDP is a discrete-time stochastic control process. At each time step, the process is in some state s, and the decision maker may choose any action a that is available in state s. The process responds at the next time step by randomly moving into a new state s' and giving the decision maker a corresponding reward R_a(s, s'). The probability that the process moves into its new state s' is influenced by the chosen action. To help certain algorithms converge, a discount rate (factor) makes the otherwise infinite sum of future rewards finite.
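The definition above can be written down directly as a transition table. Below is a minimal Python sketch of a finite MDP, assuming a hypothetical two-state example (the state names, actions, probabilities, and rewards are all invented for illustration, loosely in the style of textbook "recycling robot" examples); the helper functions step and discounted_return are likewise hypothetical. It samples transitions and rewards R_a(s, s') and shows how the discount factor keeps the return finite.

<syntaxhighlight lang="python">
import random

# Hypothetical two-state MDP (all numbers invented for illustration).
# P[s][a] is a list of (probability, next_state, reward) triples: the
# chance the process moves to s' and pays R_a(s, s') when taking a in s.
P = {
    "high": {
        "search": [(0.7, "high", 4.0), (0.3, "low", 4.0)],
        "wait":   [(1.0, "high", 1.0)],
    },
    "low": {
        "search":   [(0.6, "low", 4.0), (0.4, "high", -10.0)],  # risky: penalty on failure
        "wait":     [(1.0, "low", 1.0)],
        "recharge": [(1.0, "high", 0.0)],
    },
}

def step(state, action):
    """Sample the next state and reward from the transition model."""
    r = random.random()
    cum = 0.0
    for prob, next_state, reward in P[state][action]:
        cum += prob
        if r <= cum:
            return next_state, reward
    return P[state][action][-1][1:]  # numerical safety fallback

def discounted_return(policy, start="high", gamma=0.9, horizon=50):
    """Roll out a fixed policy and accumulate the discounted sum of rewards.
    gamma < 1 is the discount factor that keeps the sum finite."""
    state, total, discount = start, 0.0, 1.0
    for _ in range(horizon):
        state, reward = step(state, policy[state])
        total += discount * reward
        discount *= gamma
    return total

policy = {"high": "search", "low": "recharge"}
print(discounted_return(policy))
</syntaxhighlight>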
(Richard) Bellman Equation
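Before the videos, a compact statement of the idea in standard textbook form (the notation follows the MDP definition above; this is the usual statement, not a derivation specific to this page): the optimal state value V*(s) satisfies

<math>V^*(s) = \max_{a} \sum_{s'} P_a(s, s') \left[ R_a(s, s') + \gamma \, V^*(s') \right]</math>

where P_a(s, s') is the transition probability, R_a(s, s') the reward, and \gamma the discount factor. The videos below derive this step by step.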
- Reinforcement Learning: Markov-Decision Process (Part 1) | Ayush Singh - Towards Data Science
- Reinforcement Learning: Bellman Equation and Optimality (Part 2) | Ayush Singh - Towards Data Science