Abstract
Low resource speech recognition can potentially benefit greatly from exploiting a pretrained model such as wav2vec 2.0. These pretrained models have learned useful representations in an unsupervised or self-supervised task, often leveraging a very large corpus of untranscribed speech, and can then be used in various ways. In this work we compare two approaches that exploit wav2vec 2.0: an attention-based encoder-decoder (AED) model, where the wav2vec 2.0 model is used in the encoder, and a hybrid hidden Markov model (HMM/DNN) speech recognition system, where the wav2vec 2.0 model is used in the acoustic model. These approaches are compared on a very difficult Northern Sámi task, as well as on an easier, simulated low resource task in Finnish. We find that the wav2vec 2.0 AED models can learn a working attention mechanism, but are still outperformed by wav2vec 2.0 HMM/DNN systems. Our best wav2vec 2.0 HMM/DNN recipe trained on 20 hours of data is competitive with an HMM/DNN system trained on 1600 hours.
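To make the first setup concrete, below is a minimal sketch of wiring a pretrained wav2vec 2.0 model in as the encoder of an attention-based encoder-decoder ASR model. It assumes the HuggingFace transformers Wav2Vec2Model and a generic PyTorch Transformer decoder; the checkpoint name, vocabulary size, and decoder depth are illustrative placeholders and do not reproduce the paper's actual recipe.

```python
# Minimal sketch (not the paper's recipe): wav2vec 2.0 as the encoder of an
# attention-based encoder-decoder (AED) ASR model.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model  # assumes the HuggingFace checkpoint below


class Wav2Vec2AED(nn.Module):
    def __init__(self, vocab_size, d_model=768, num_decoder_layers=4):
        super().__init__()
        # Pretrained self-supervised encoder; d_model must match its hidden
        # size (768 for the base model).
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.token_embedding = nn.Embedding(vocab_size, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=num_decoder_layers)
        self.output_proj = nn.Linear(d_model, vocab_size)

    def forward(self, input_values, target_tokens):
        # Encoder: raw 16 kHz waveform -> contextual representations (B, T, d_model).
        enc_out = self.encoder(input_values).last_hidden_state
        # Decoder: cross-attends over the encoder states while predicting the
        # next output token (teacher forcing, causal mask over previous targets;
        # positional encodings omitted for brevity).
        tgt = self.token_embedding(target_tokens)
        causal_mask = torch.triu(
            torch.full((target_tokens.size(1), target_tokens.size(1)), float("-inf")),
            diagonal=1,
        )
        dec_out = self.decoder(tgt, enc_out, tgt_mask=causal_mask)
        return self.output_proj(dec_out)  # (B, U, vocab_size) token logits


# Usage with dummy data: two 1-second utterances and 12 target tokens each.
model = Wav2Vec2AED(vocab_size=1000)
logits = model(torch.randn(2, 16000), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 1000])
```

The HMM/DNN alternative described in the abstract would instead use the same pretrained encoder inside the acoustic model, producing frame-level state posteriors for a conventional hybrid decoder rather than attending to encoder states with an autoregressive token decoder.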
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of Interspeech'22 |
| Publisher | International Speech Communication Association |
| Pages | 3543-3547 |
| Number of pages | 5 |
| Publication status | Published - 2022 |
| MoE publication type | A4 Article in a conference publication |
| Event | Interspeech - Incheon, Korea, Republic of, 18 Sep 2022 → 22 Sep 2022 |
Publication series
| Name | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| --- | --- |
| Publisher | International Speech Communication Association |
| ISSN (Print) | 2308-457X |
| ISSN (Electronic) | 1990-9772 |
Conference
| Conference | Interspeech |
| --- | --- |
| Country/Territory | Korea, Republic of |
| City | Incheon |
| Period | 18/09/2022 → 22/09/2022 |
Keywords
- low resource
- speech recognition
- wav2vec 2.0
Projects
- Understanding speech and scene with ears and eyes (Active)
  Kurimo, M., Grósz, T. & Virkkunen, A.
  01/01/2022 → 31/12/2024
  Project: Academy of Finland: Other research funding