Abstract
Self-supervised models, such as HuBERT and its audio-visual version AV-HuBERT, have demonstrated excellent performance on various tasks. The main factor in their success is the pre-training procedure, which requires only raw data without human transcription. During the self-supervised pre-training phase, HuBERT is trained to discover latent clusters in the training data, but these clusters are discarded, and only the last hidden layer is used by the conventional finetuning step. We investigate what latent information the AV-HuBERT model managed to uncover via its clusters and whether we can use them directly for speech recognition. To achieve this, we consider the sequence of cluster ids as a 'language' developed by AV-HuBERT and attempt to translate it to English text via small LSTM-based models. These translation models enable us to investigate the relations between the clusters and the English alphabet, shedding light on groups of latent clusters specialized to recognise specific phonetic groups. Our results demonstrate that, using the pre-trained system as a quantizer, we are able to compress the video to as low as 275 bit/sec while maintaining acceptable speech recognition accuracy. Furthermore, compared to the conventional finetuning step, our solution has a considerably lower computational cost.
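The 275 bit/sec figure follows from treating each discovered cluster id as a fixed-rate symbol. A minimal back-of-the-envelope sketch of that arithmetic is below; the 25 Hz frame rate and 2048-cluster codebook are assumptions for illustration, not values stated in this abstract.

```python
import math

# Hypothetical values, chosen only to illustrate the bitrate arithmetic:
FRAME_RATE_HZ = 25    # assumed: one cluster id per video frame at 25 Hz
NUM_CLUSTERS = 2048   # assumed codebook size

# Indexing one of NUM_CLUSTERS symbols costs log2(NUM_CLUSTERS) bits.
bits_per_frame = math.log2(NUM_CLUSTERS)      # 11 bits per frame
bitrate = FRAME_RATE_HZ * bits_per_frame      # 25 * 11 = 275 bit/sec
print(bitrate)
```

Under these assumptions the quantized video stream costs exactly 275 bit/sec, matching the compression rate reported in the abstract.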
Original language | English |
---|---|
Title of host publication | 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings |
Publisher | IEEE |
Pages | 11196-11200 |
Number of pages | 5 |
ISBN (Electronic) | 979-8-3503-4485-1 |
DOIs | |
Publication status | Published - 2024 |
MoE publication type | A4 Conference publication |
Event | IEEE International Conference on Acoustics, Speech and Signal Processing - Seoul, Korea, Republic of Duration: 14 Apr 2024 → 19 Apr 2024
Publication series
Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
---|---|
ISSN (Print) | 1520-6149 |
Conference
Conference | IEEE International Conference on Acoustics, Speech and Signal Processing |
---|---|
Abbreviated title | ICASSP |
Country/Territory | Korea, Republic of |
City | Seoul |
Period | 14/04/2024 → 19/04/2024 |
Keywords
- ASR
- audiovisual
- AV-HuBERT
- machine translation
- SSL
Fingerprint
Dive into the research topics of 'INVESTIGATING THE CLUSTERS DISCOVERED BY PRE-TRAINED AV-HUBERT'. Together they form a unique fingerprint.
USSEE: Understanding Speech and Scene with Ears and Eyes
Kurimo, M., Virkkunen, A. & Grósz, T.
01/01/2022 → 31/12/2024
Project: Academy of Finland: Other research funding
-: Finnish Center for Artificial Intelligence
01/01/2019 → 31/12/2022
Project: Academy of Finland: Other research funding