Inference of Strategic Behavior based on Incomplete Observation Data

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Details

Original language: English
Title of host publication: NIPS17 Workshop: Learning in the Presence of Strategic Behavior
Publisher: Carnegie Mellon University
Number of pages: 4
State: Published - 8 Dec 2017
MoE publication type: D3 Professional conference proceedings
Event: Neural Information Processing Systems - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017
Conference number: 31

Conference

Conference: Neural Information Processing Systems
Abbreviated title: NIPS
Country: United States
City: Long Beach
Period: 04/12/2017 - 09/12/2017


Abstract

Inferring the goals, preferences and restrictions of strategically behaving agents is a common task in many settings, and an important requirement for enabling computer systems to better model and understand human users.
Inverse reinforcement learning (IRL) is one method for performing this kind of inference from observations of the agent's behavior.
However, traditional IRL methods are only applicable when the observations take the form of full state-action paths -- an assumption that does not hold in many real-world modelling settings.
This paper demonstrates that such inference remains possible even under an arbitrary observation noise model.
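The research-area keywords below point to approximate Bayesian computation (ABC) with Monte Carlo simulation as the inference machinery. As a hedged illustration of that general idea -- not the paper's actual algorithm -- the sketch below uses ABC rejection sampling to infer the reward parameters of a hypothetical softmax agent from noisy behavioral summaries; the toy agent, the corruption noise model, and all names (`simulate`, `abc_posterior`, `theta`, `eps`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy agent: picks among 3 actions via a softmax over an
# unknown reward vector theta; we only observe noisy action frequencies,
# not full state-action paths.
def simulate(theta, n=200, noise=0.1):
    p = np.exp(theta - theta.max())
    p /= p.sum()
    actions = rng.choice(3, size=n, p=p)
    # Observation noise: each recorded action is replaced uniformly at
    # random with probability `noise`.
    flip = rng.random(n) < noise
    actions[flip] = rng.choice(3, size=int(flip.sum()))
    return np.bincount(actions, minlength=3) / n

theta_true = np.array([1.0, 0.0, -1.0])
observed = simulate(theta_true)

# ABC rejection sampling: draw candidate reward vectors from the prior,
# simulate the same noisy observation process, and keep draws whose
# summary statistics land within `eps` (L1 distance) of the data.
def abc_posterior(observed, n_draws=4000, eps=0.15):
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, 1.0, size=3)
        if np.abs(simulate(theta) - observed).sum() < eps:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_posterior(observed)
print("accepted draws:", len(posterior))
```

Because the likelihood of the noisy summaries never has to be written down, the same rejection loop works for any observation noise model one can simulate from, which is the property the abstract highlights.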

Research areas

  • Inverse reinforcement learning, Bayesian inference, Approximate Bayesian computation, Monte Carlo simulation


ID: 16109860