Heterogeneous non-local fusion for multimodal activity recognition

Petr Byvshev, Pascal Mettes, Yu Xiao

Research output: Conference article in proceedings, scientific, peer-reviewed

3 Citations (Scopus)
167 Downloads (Pure)


In this work, we investigate activity recognition using multimodal inputs from heterogeneous sensors. Activity recognition is commonly tackled from a single-modal perspective using videos. When multiple signals are used, they come from the same homogeneous modality, e.g. color and optical flow. Here, we propose an activity network that fuses multimodal inputs coming from completely different, heterogeneous sensors. We frame such heterogeneous fusion as a non-local operation, observing that in a non-local operation only the channel dimensions of the inputs need to match. In the network, heterogeneous inputs are fused while each input maintains the shape and dimensionality that fits it. We outline both asymmetric fusion, where one modality serves to reinforce the other, and symmetric fusion variants. To further promote research into multimodal activity recognition, we introduce GloVid, a first-person activity dataset captured with video recordings and smart-glove sensor readings. Experiments on GloVid show the potential of heterogeneous non-local fusion for activity recognition, outperforming individual modalities and standard fusion techniques.
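The key observation above, that a non-local operation only requires the channel dimensions of the two inputs to match, can be illustrated with a cross-attention-style computation. The following is a minimal NumPy sketch of asymmetric fusion in that spirit, not the paper's implementation; all names, weight shapes, and sizes (`Wq`, `Wk`, `Wv`, `N`, `M`, `C`, `d`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_fusion(video, glove, Wq, Wk, Wv):
    """Asymmetric non-local fusion: glove readings modulate video features.

    video: (N, C) flattened space-time positions; glove: (M, C) sensor
    timesteps. N and M may differ freely; only the channel dim C is shared.
    """
    q = video @ Wq                                            # (N, d) queries from video
    k = glove @ Wk                                            # (M, d) keys from glove
    v = glove @ Wv                                            # (M, C) values from glove
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)   # (N, M) pairwise affinities
    return video + attn @ v                                   # residual keeps video shape

C, d = 16, 8
N, M = 4 * 7 * 7, 30                   # e.g. T*H*W video positions vs. glove samples
video = rng.standard_normal((N, C))
glove = rng.standard_normal((M, C))
Wq = rng.standard_normal((C, d))
Wk = rng.standard_normal((C, d))
Wv = rng.standard_normal((C, C))
fused = nonlocal_fusion(video, glove, Wq, Wk, Wv)
print(fused.shape)  # (196, 16): video shape preserved after fusion
```

Note how the output keeps the video stream's own shape, so the block can be dropped into a video backbone; swapping the roles of the two inputs (or applying the operation in both directions) gives the symmetric variant.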

Title: ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval
ISBN (electronic): 9781450370875
DOI - permanent links
Status: Published - 8 June 2020
OKM publication type: A4 Conference article in proceedings
Event: ACM International Conference on Multimedia Retrieval - Dublin, Ireland
Duration: 8 June 2020 - 11 June 2020
Conference number: 10
