Deep Distributed Q Network Partial Observability

It is possible to design a Partially Observable Markov Decision Process (POMDP) that models the decision-making of two agents, with the decision process computed by a multi-agent reinforcement learning procedure. For example, to compute the probability of a person's decision, a random choice over the two roles (agents) is introduced; this chance move is known as the Harsanyi transformation, which converts a problem of incomplete information into one of imperfect information. Two POMDPs are compatible if any policy for one of the agents is also a policy for the other.

Zahra M.M.A. Sadiq
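
The sketch below is a minimal, hypothetical illustration of these ideas, not an implementation from the source: a chance move (the Harsanyi transformation) randomly assigns one of two roles, each agent then observes only its own part of a hidden state, and a single shared policy is applied regardless of role, reflecting the compatibility condition that a policy for one agent is also a policy for the other. All names (roles, observation model, policy) are assumptions for illustration.

```python
import random

# Hypothetical role labels for the two agents.
ROLES = ("sender", "receiver")


def harsanyi_transformation():
    """Nature's random choice over the two roles.

    This chance move turns the incomplete-information problem into one of
    imperfect information: each agent knows its own role, not the other's.
    """
    return random.choice(ROLES)


def observe(role, hidden_state):
    """Partial observability: each agent sees only its own slice of the state."""
    # Assumed observation model: a role-indexed, lossy view of the hidden state.
    return hidden_state[role]


def policy(observation):
    """Placeholder decision rule mapping an observation to an action.

    Using the same policy for both roles illustrates the compatibility
    condition described above.
    """
    return "act" if observation > 0.5 else "wait"


def simulate_episode():
    # Hidden state with one component per role (an illustrative assumption).
    hidden_state = {"sender": random.random(), "receiver": random.random()}
    role = harsanyi_transformation()    # chance move assigns the role
    obs = observe(role, hidden_state)   # agent sees only its own component
    action = policy(obs)                # shared policy chooses an action
    return role, round(obs, 3), action


if __name__ == "__main__":
    for _ in range(3):
        print(simulate_episode())
```

Because the two roles share the same observation and action spaces here, the one `policy` function is valid for either agent, which is the sense in which the two decision processes are compatible in this toy setup.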