Inference of Strategic Behavior based on Incomplete Observation Data

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional


Original language: English
Title of host publication: NIPS17 Workshop: Learning in the Presence of Strategic Behavior
Publication status: Published - 8 Dec 2017
MoE publication type: D3 Professional conference proceedings
Event: IEEE Conference on Neural Information Processing Systems - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017
Conference number: 31


Conference: IEEE Conference on Neural Information Processing Systems
Abbreviated title: NIPS
Country: United States
City: Long Beach




Inferring the goals, preferences, and restrictions of strategically behaving agents is a common task in many situations, and an important requirement for enabling computer systems to better model and understand human users.
Inverse reinforcement learning (IRL) is one method for performing this kind of inference from observations of the agent's behavior.
However, traditional IRL methods apply only when the observations take the form of state-action paths, an assumption that does not hold in many real-world modelling settings.
This paper demonstrates that inference is possible even with an arbitrary observation noise model.
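The keywords below point to approximate Bayesian computation (ABC) with Monte Carlo simulation as the inference machinery. As an illustration of that general idea, not the paper's actual model or experiment, the following sketch infers a hidden goal parameter of a simulated agent purely from a noisy observation of its behavior, using ABC rejection sampling; all names, the toy agent, and the noise model are assumptions made for this example.

```python
import random

# Hypothetical toy setting: an agent on a line greedily walks toward an
# unknown goal g; we only observe its final position corrupted by noise.
def simulate_agent(goal, n_steps=20, noise=0.5, rng=random):
    pos = 0.0
    for _ in range(n_steps):
        pos += 0.2 if goal > pos else -0.2
    # Arbitrary observation noise model: Gaussian corruption of the summary.
    return pos + rng.gauss(0.0, noise)

def abc_rejection(observed, prior_low, prior_high,
                  n_samples=20000, tol=0.3, seed=0):
    # ABC rejection sampling: draw goal candidates from a uniform prior,
    # run the simulator, and keep candidates whose simulated observation
    # lands within `tol` of the real one. The accepted set approximates
    # the posterior over the agent's goal.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        g = rng.uniform(prior_low, prior_high)
        sim = simulate_agent(g, rng=rng)
        if abs(sim - observed) < tol:
            accepted.append(g)
    return accepted

true_goal = 2.0
obs = simulate_agent(true_goal, rng=random.Random(42))
posterior = abc_rejection(obs, prior_low=-5.0, prior_high=5.0)
estimate = sum(posterior) / len(posterior)
print(len(posterior), estimate)
```

Because only forward simulation of the agent is needed, this scheme sidesteps the requirement of observing full state-action paths; the observation model can be anything the simulator can reproduce.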

    Research areas

  • Inverse reinforcement learning, Bayesian inference, Approximate Bayesian computation, Monte Carlo simulation


ID: 16109860