Inference of Strategic Behavior based on Incomplete Observation Data

Antti Kangasrääsiö, Samuel Kaski

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Professional


Abstract

Inferring the goals, preferences and restrictions of strategically behaving agents is a common task in many situations, and an important requirement for enabling computer systems to better model and understand human users.
Inverse reinforcement learning (IRL) is one method for performing this kind of inference based on observations of the agent's behavior.
However, traditional IRL methods are only applicable when the observations are in the form of state-action paths, an assumption which does not hold in many real-world modelling settings.
This paper demonstrates that inference is possible even with an arbitrary observation noise model.
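The abstract, together with the keywords below, points to likelihood-free inference via approximate Bayesian computation (ABC): simulate agent behavior under candidate parameters, pass it through the observation noise model, and accept parameters whose simulated summaries match the observed data. The sketch below is a hypothetical toy illustration of ABC rejection sampling under a simple dropout noise model, not the paper's actual method; the simulator, summary statistic, and tolerance are all assumptions for the example.

```python
import random

random.seed(0)


def simulate(theta, n=200):
    """Toy stand-in for an agent simulator: the agent's binary choices
    follow rate theta, and an observation noise model drops ~30% of
    data points (returned as None), so paths are incomplete."""
    obs = []
    for _ in range(n):
        action = 1 if random.random() < theta else 0
        obs.append(action if random.random() < 0.7 else None)
    return obs


def summary(obs):
    """Summary statistic computed only from the non-missing observations."""
    seen = [a for a in obs if a is not None]
    return sum(seen) / len(seen) if seen else 0.0


def abc_rejection(observed, n_samples=5000, eps=0.03):
    """ABC rejection sampling: draw theta from a uniform prior, simulate
    noisy data, and keep theta when the summaries are within eps."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_samples):
        theta = random.random()  # uniform prior on [0, 1]
        if abs(summary(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return accepted


# Pretend these incomplete observations came from a real agent with theta = 0.3.
observed = simulate(0.3)
posterior = abc_rejection(observed)
estimate = sum(posterior) / len(posterior)
```

Because the likelihood of the noisy, incomplete observations is never evaluated analytically, the same loop works for an arbitrary observation noise model: only `simulate` needs to change.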
Original language: English
Title of host publication: NIPS17 Workshop: Learning in the Presence of Strategic Behavior
Publisher: Carnegie Mellon University
Number of pages: 4
Publication status: Published - 8 Dec 2017
MoE publication type: D3 Professional conference proceedings
Event: IEEE Conference on Neural Information Processing Systems - Long Beach, United States
Duration: 4 Dec 2017 - 9 Dec 2017
Conference number: 31

Conference

Conference: IEEE Conference on Neural Information Processing Systems
Abbreviated title: NIPS
Country/Territory: United States
City: Long Beach
Period: 04/12/2017 - 09/12/2017

Keywords

  • Inverse reinforcement learning
  • Bayesian inference
  • Approximate Bayesian computation
  • Monte Carlo simulation
