Abstract
In human-in-the-loop machine learning, the user provides information beyond that in the training data. Many algorithms and user interfaces have been designed to optimize and facilitate this human--machine interaction; however, fewer studies have addressed the potential defects these designs can cause. Effective interaction often requires exposing the user to the training data or its statistics. The system design is then critical: if the user reinforces noisy patterns in the data, the result is double use of data and overfitting. We propose a user-modelling methodology, which assumes simple rational behaviour, to correct this problem. We show, in a user study with 48 participants, that the method improves predictive performance in a sparse linear regression sentiment analysis task, where graded user knowledge on feature relevance is elicited. We believe that the key idea of inferring user knowledge with probabilistic user models has general applicability in guarding against overfitting and improving interactive machine learning.
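The core idea, treating user feedback as a noisy observation of the user's true knowledge rather than as ground truth, can be illustrated with a minimal sketch. This is not the paper's model: the binary feedback, the single `accuracy` parameter, and the flat prior are illustrative assumptions, and the paper elicits graded (not binary) relevance knowledge.

```python
import numpy as np

def posterior_relevance(feedback, prior=0.5, accuracy=0.8):
    """Simple probabilistic user model (illustrative, not the paper's model).

    Assumes the user marks a feature 'relevant' (1) with probability
    `accuracy` when it truly is relevant, and with probability
    1 - accuracy when it is not. Returns, per feature, the posterior
    probability that the feature is relevant given the feedback.
    """
    feedback = np.asarray(feedback, dtype=float)
    # Likelihood of each observed judgment under the two hypotheses
    like_relevant = np.where(feedback == 1, accuracy, 1 - accuracy)
    like_irrelevant = np.where(feedback == 1, 1 - accuracy, accuracy)
    # Bayes' rule per feature
    evidence = like_relevant * prior + like_irrelevant * (1 - prior)
    return like_relevant * prior / evidence

# A single 'relevant' click moves the posterior only to 0.8, not 1.0,
# so the model never fully trusts one noisy judgment.
print(posterior_relevance([1, 0, 1]))  # → [0.8 0.2 0.8]
```

The resulting posteriors could then serve as soft inclusion probabilities in a sparse regression prior, which is how this kind of user model tempers feedback on spurious features instead of letting the user double-use the data.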
Original language | English |
---|---|
Title of host publication | IUI 2018 - Proceedings of the 23rd International Conference on Intelligent User Interfaces |
Publisher | ACM |
Pages | 305-310 |
Number of pages | 6 |
ISBN (Electronic) | 978-1-4503-4945-1 |
DOIs | |
Publication status | Published - 8 Mar 2018 |
MoE publication type | A4 Article in a conference publication |
Event | International Conference on Intelligent User Interfaces (IUI 2018), Tokyo, Japan. Duration: 7 Mar 2018 → 11 Mar 2018. Conference number: 23. http://iui.acm.org/2018/index.html |
Conference
Conference | International Conference on Intelligent User Interfaces |
---|---|
Abbreviated title | IUI |
Country | Japan |
City | Tokyo |
Period | 07/03/2018 → 11/03/2018 |
Internet address | http://iui.acm.org/2018/index.html |
Keywords
- Interactive machine learning
- Probabilistic modeling
- Bayesian inference
- Overfitting
- Expert prior elicitation
- Human-in-the-loop machine learning