Abstract
To collaborate effectively with humans, Artificial Intelligence (AI) systems must understand human behavior and the factors influencing it, including people's goals, preferences, and abilities. Interactions with humans are typically costly, and in many real-life situations an AI must adapt to human behavior after only a few interactions. Additionally, when an AI interacts with humans to learn about their behavior, the interactions need to proceed without any noticeable delay for the human, which necessitates adaptation in real time. This thesis investigates how an AI system can learn about other agents in a sample-efficient and real-time manner, using methods based on reinforcement learning. The first contribution of this thesis is a method for learning neural-network representations of goal-driven agents' behaviors from incomplete observations, and it shows that these representations improve performance in cooperative decision-making tasks. The second contribution is an automated method for producing task distributions and the related ground-truth data for training a meta-learner that assesses the skill level of a cooperating partner and adapts quickly to its behavior. The third contribution presents a novel method for designing informative experiments for estimating the parameters of simulation-based user models that lack closed-form likelihood functions and are grounded in cognitive science. This method simultaneously amortizes both the estimation of these parameters and the design of the experiments. Together, these contributions cover a wide range of settings in which useful representations of behavior are learned to improve cooperation, alongside the efficient learning of complex user models. The implications of the methods developed, as well as their strengths and limitations, are discussed.
| Translated title of the contribution | Reaaliaikaisia ja näytetehokkaita menetelmiä rationaalisten käyttäjämallien oppimiseen |
| --- | --- |
| Original language | English |
| Qualification | Doctor's degree |
| Awarding Institution | |
| Supervisors/Advisors | |
| Publisher | |
| Print ISBNs | 978-952-64-1731-8 |
| Electronic ISBNs | 978-952-64-1732-5 |
| Publication status | Published - 2024 |
| MoE publication type | G5 Doctoral dissertation (article) |
Keywords
- deep learning
- reinforcement learning