Abstract
Several computational approaches have been proposed for inferring the affective state of the user, motivated, for example, by the goal of building improved interfaces that can adapt to the user's needs and internal state. While fairly good results have been obtained for inferring the user state under highly controlled conditions, considerable work remains to be done on learning high-quality estimates of subjective evaluations of the state in more natural conditions. In this work, we discuss how two recent machine learning concepts, multi-view learning and multi-task learning, can be adapted for user state recognition, and demonstrate them on two data collections of varying quality. Multi-view learning enables combining multiple measurement sensors in a principled way while automatically learning the importance of each sensor. Multi-task learning, in turn, shows how multiple learning tasks can be learned together to improve accuracy. We demonstrate two types of multi-task learning: learning multiple state indicators together, and learning models for multiple users together. We also introduce a novel algorithm that combines the benefits of multi-task learning and multi-view learning in a unified model.
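As a rough illustration of the two concepts in the abstract, the sketch below is a minimal toy example, not the algorithm from the paper: it combines a softmax-weighted multi-view linear model (the per-sensor importances are learned) with mean-regularized multi-task coupling (each task's weights are shrunk toward the across-task mean), trained by gradient descent on synthetic data. All names, data shapes, and hyperparameters are assumptions made for this example.

```python
# Illustrative sketch only: multi-view + multi-task linear regression.
# Views stand in for sensors (e.g., physiology, video); tasks stand in
# for multiple subjective state indicators rated by users.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two views with different dimensionalities, three tasks.
n, d1, d2, T = 200, 8, 5, 3
X = [rng.normal(size=(n, d1)), rng.normal(size=(n, d2))]
W_true = [rng.normal(size=(d1, T)), rng.normal(size=(d2, T))]
# Make the first view twice as informative as the second.
Y = (2.0 * X[0] @ W_true[0] + 1.0 * X[1] @ W_true[1]
     + 0.1 * rng.normal(size=(n, T)))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Parameters: one weight matrix per view, plus unnormalized view
# importances theta; softmax(theta) gives the learned view weights.
W = [np.zeros((d1, T)), np.zeros((d2, T))]
theta = np.zeros(2)
lam, lr, steps = 0.1, 0.05, 2000   # assumed hyperparameters

for _ in range(steps):
    a = softmax(theta)
    P = sum(a[v] * X[v] @ W[v] for v in range(2))  # combined prediction
    R = (P - Y) / n                                # squared-loss gradient
    # Gradient of the loss w.r.t. each view's importance weight.
    g_a = np.array([np.sum(R * (X[v] @ W[v])) for v in range(2)])
    for v in range(2):
        # Multi-task coupling: penalize each task's weight vector for
        # deviating from the mean over tasks (mean-regularized MTL).
        coupling = lam * (W[v] - W[v].mean(axis=1, keepdims=True))
        W[v] -= lr * (a[v] * X[v].T @ R + coupling)
    # Chain rule through the softmax for the view-importance parameters.
    theta -= lr * a * (g_a - np.dot(a, g_a))

P = sum(softmax(theta)[v] * X[v] @ W[v] for v in range(2))
print("learned view weights:", np.round(softmax(theta), 3))
print("final MSE:", round(float(np.mean((P - Y) ** 2)), 3))
```

Because the view weights and the per-view regression weights can trade off in scale, the printed weights are only indicative; the point of the sketch is that the sensor importances and the coupled per-task models are learned jointly in a single objective, mirroring the combination of multi-view and multi-task learning described above.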
Original language | English |
---|---|
Pages (from-to) | 97-106 |
Number of pages | 10 |
Journal | Neurocomputing |
Volume | 139 |
DOIs | |
Publication status | Published - 2 Sept 2014 |
MoE publication type | A1 Journal article-refereed |
Keywords
- Affect recognition
- Machine learning
- Multi-task learning
- Multi-view learning
- Emotion
- Model
- Environments