Apprenticeship Learning - Inverse Reinforcement Learning (IRL)

* [[Inside Out - Curious Optimistic Reasoning]]

* [[Generative Adversarial Network (GAN)]]

* [[Connecting Brains]]

* [http://arxiv.org/pdf/1806.06877.pdf A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress | Saurabh Arora, Prashant Doshi] 18 Jun 2018

* [http://arxiv.org/pdf/1805.07687.pdf Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications | Daniel S. Brown, Scott Niekum] 23 Jun 2018

Revision as of 08:42, 8 February 2022



Inverse reinforcement learning (IRL) infers a reward function from observed behavior (demonstrations), which then allows for policy improvement and generalization. While ordinary reinforcement learning uses rewards and punishments to learn a behavior, IRL reverses the direction: an agent (e.g. a robot) observes a person's behavior and infers what goal that behavior is trying to achieve.
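As a minimal sketch of this idea, the toy example below runs a simplified feature-expectation-matching loop on a 4-state chain MDP. The MDP, the "expert" demonstrations, and all names are illustrative inventions for this sketch, not the method of any particular IRL paper: the expert always moves right toward state 3, and the loop adjusts linear reward weights until a greedy policy under the inferred reward reproduces the expert's discounted state-visitation features.

```python
import numpy as np

# Toy IRL sketch: a 4-state deterministic chain MDP (states 0..3).
# The "expert" always moves right toward state 3; IRL must recover a
# reward that explains this behavior. All names here are illustrative.
N_STATES, GAMMA, HORIZON = 4, 0.9, 6

def step(s, a):
    """Deterministic transition: action 1 = right, action 0 = left."""
    return min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)

def feature_expectations(policy, start=0):
    """Discounted one-hot state-visit counts of a deterministic rollout."""
    mu, s = np.zeros(N_STATES), start
    for t in range(HORIZON):
        mu[s] += GAMMA ** t
        s = step(s, policy[s])
    return mu

def greedy_policy(w):
    """Finite-horizon value iteration under linear reward r(s') = w[s']."""
    V = np.zeros(N_STATES)
    Q = np.zeros((N_STATES, 2))
    for _ in range(HORIZON):
        for s in range(N_STATES):
            for a in (0, 1):
                s2 = step(s, a)
                Q[s, a] = w[s2] + GAMMA * V[s2]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Expert demonstrations: always move right (ends up at state 3).
mu_expert = feature_expectations([1, 1, 1, 1])

# IRL loop: nudge the reward weights toward matching the expert's
# feature expectations (a simplified feature-matching update).
w = np.zeros(N_STATES)
for _ in range(50):
    pi = greedy_policy(w)
    w += 0.1 * (mu_expert - feature_expectations(pi))

print(np.argmax(w))               # state 3 gets the highest inferred reward
print(greedy_policy(w).tolist())  # learned greedy policy imitates the expert
```

The recovered reward is highest at the state the expert steers toward, and the policy induced by that reward imitates the demonstrations, illustrating the "observe behavior, infer the goal" direction described above.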