Abstract
This work studies the combination of audio and acceleration sensor streams for automatic classification of user context. Instead of performing sensor fusion at the feature level, we study the combination of classifier output distributions using a number of different classifiers. The performance of the algorithms is evaluated on a data set collected with casually worn mobile phones in a variety of real-world environments and user activities. The results show that combining audio and acceleration data improves the classification accuracy of physical activities for all classifiers, whereas environment classification does not benefit notably from acceleration features.
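As a rough illustration of the decision-level combination described in the abstract, the sketch below averages the posterior distributions produced by independently trained audio and acceleration classifiers. The feature arrays, classifier choices, and the equal-weight sum rule are assumptions for illustration only, not the exact setup reported in the paper.

```python
# Minimal sketch of decision-level fusion of audio and acceleration
# classifiers via a sum rule over posterior distributions.
# Feature shapes, classifier choices, and equal weights are
# illustrative assumptions, not the configuration used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

def fuse_posteriors(audio_clf, accel_clf, X_audio, X_accel):
    """Average per-class posteriors from two modality-specific classifiers."""
    p_audio = audio_clf.predict_proba(X_audio)   # shape: (n_samples, n_classes)
    p_accel = accel_clf.predict_proba(X_accel)
    p_fused = (p_audio + p_accel) / 2.0          # sum rule with equal weights
    return audio_clf.classes_[np.argmax(p_fused, axis=1)]

# Hypothetical training data: audio features and acceleration features
# sharing the same activity labels.
rng = np.random.default_rng(0)
X_audio_train, X_accel_train = rng.normal(size=(200, 13)), rng.normal(size=(200, 6))
y_train = rng.integers(0, 4, size=200)           # e.g. four physical activities

audio_clf = GaussianNB().fit(X_audio_train, y_train)
accel_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_accel_train, y_train)

X_audio_test, X_accel_test = rng.normal(size=(20, 13)), rng.normal(size=(20, 6))
predictions = fuse_posteriors(audio_clf, accel_clf, X_audio_test, X_accel_test)
```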
Original language | English |
---|---|
Title of host publication | The 2011 European Signal Processing Conference (EUSIPCO-2011), Barcelona, Spain, August 29 - September 2, 2011 |
Publication status | Published - 2011 |
MoE publication type | A4 Article in a conference publication |
Keywords
- context classification
- machine learning
- multimodal signal processing