Abstract
Inferring the goals, preferences and restrictions of strategically behaving agents is a common task in many situations, and an important requirement for enabling computer systems to better model and understand human users.
Inverse reinforcement learning (IRL) is one method for performing this kind of inference based on observations of the agent's behavior.
However, traditional IRL methods are only applicable when the observations are in the form of state-action paths -- an assumption which does not hold in many real-world modelling settings.
This paper demonstrates that inference is possible even with an arbitrary observation noise model.
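The keywords below suggest the inference is carried out with approximate Bayesian computation (ABC) and Monte Carlo simulation. The following is a minimal illustrative sketch of that idea, not the authors' algorithm: a hypothetical chain-world agent, a stand-in Gaussian observation noise model, and ABC rejection sampling over the agent's unknown goal state.

```python
import numpy as np

# Illustrative sketch only: a toy chain-world agent, a hypothetical Gaussian
# observation noise model, and ABC rejection sampling over the agent's goal.
rng = np.random.default_rng(0)
N_STATES = 5

def simulate_path(goal_state, path_len=10):
    """Greedy agent walks toward its goal state on a 1-D chain."""
    s, path = 0, []
    for _ in range(path_len):
        s += int(np.sign(goal_state - s))  # one step toward the goal
        path.append(s)
    return np.array(path)

def observe(path, noise=0.5):
    """Stand-in for an arbitrary observation noise model (here: Gaussian)."""
    return path + rng.normal(0.0, noise, size=path.shape)

def abc_posterior(observed, n_sims=5000, eps=2.5):
    """ABC rejection: keep goal hypotheses whose simulated noisy
    observations land within eps of the actual observations."""
    accepted = []
    for _ in range(n_sims):
        theta = rng.integers(0, N_STATES)         # uniform prior over goals
        sim = observe(simulate_path(theta))       # likelihood-free: simulate
        if np.linalg.norm(sim - observed) < eps:  # compare via a distance
            accepted.append(theta)
    return np.array(accepted, dtype=int)

obs = observe(simulate_path(3))  # true goal is state 3; we see only noise
post = abc_posterior(obs)
print("posterior over goal state:",
      np.bincount(post, minlength=N_STATES) / max(len(post), 1))
```

Because the posterior is built purely from forward simulations, the same rejection loop works for any observation noise model that can be sampled from, which is the flexibility the abstract highlights.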
| Original language | English |
|---|---|
| Title of host publication | NIPS17 Workshop: Learning in the Presence of Strategic Behavior |
| Publisher | Carnegie Mellon University |
| Number of pages | 4 |
| Publication status | Published - 8 Dec 2017 |
| MoE publication type | D3 Professional conference proceedings |
| Event | Conference on Neural Information Processing Systems, Long Beach, United States. Duration: 4 Dec 2017 → 9 Dec 2017. Conference number: 31 |
Conference

| Conference | Conference on Neural Information Processing Systems |
|---|---|
| Abbreviated title | NIPS |
| Country | United States |
| City | Long Beach |
| Period | 04/12/2017 → 09/12/2017 |
Keywords
- Inverse reinforcement learning
- Bayesian inference
- Approximate Bayesian computation
- Monte Carlo simulation