Low Resource Comparison of Attention-based and Hybrid ASR Exploiting wav2vec 2.0

Aku Rouhe*, Anja Virkkunen, Juho Leinonen, Mikko Kurimo

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review



Low resource speech recognition can potentially benefit greatly from exploiting a pretrained model such as wav2vec 2.0. These pretrained models have learned useful representations in an unsupervised or self-supervised task, often leveraging a very large corpus of untranscribed speech. The pretrained models can then be used in various ways. In this work, we compare two approaches that exploit wav2vec 2.0: an attention-based end-to-end model (AED), where the wav2vec 2.0 model is used in the encoder, and a hybrid hidden Markov model (HMM/DNN) speech recognition system, where the wav2vec 2.0 model is used in the acoustic model. These approaches are compared on a very difficult Northern Sámi task, as well as an easier, simulated low resource task in Finnish. We find that the wav2vec 2.0 AED models can learn a working attention mechanism, but are still outperformed by wav2vec 2.0 HMM/DNN systems. Our best wav2vec 2.0 HMM/DNN recipe trained on 20 hours is competitive with an HMM/DNN system trained on 1600 hours.
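The two integration points described in the abstract can be sketched with toy numpy stand-ins: a frozen "pretrained encoder" produces frame-level representations, which either feed an attention-based decoder step (the AED route) or a frame-level acoustic model emitting per-state posteriors for HMM decoding (the hybrid route). All weights, shapes, and function names below are hypothetical illustrations, not the actual wav2vec 2.0 architecture or the paper's recipes:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_encoder(audio, W_enc):
    # Stand-in for a frozen wav2vec 2.0 encoder: maps input frames
    # (T, feat_dim) to learned representations (T, hidden_dim).
    return np.tanh(audio @ W_enc)

def attention_decode_step(reps, query, W_out):
    # AED route: one decoder step with dot-product attention over the
    # encoder representations, producing output-token logits.
    scores = reps @ query                        # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over time
    context = weights @ reps                     # (hidden_dim,)
    return context @ W_out, weights              # logits (vocab,), attention

def acoustic_model(reps, W_am):
    # Hybrid route: a frame-level classifier over HMM states
    # (e.g. senones); its posteriors would feed a standard HMM decoder.
    logits = reps @ W_am                         # (T, n_states)
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

T, feat_dim, hidden, vocab, n_states = 50, 40, 64, 30, 100
audio = rng.normal(size=(T, feat_dim))
reps = pretrained_encoder(audio, rng.normal(size=(feat_dim, hidden)) * 0.1)

# Same representations, two downstream uses:
logits, attn = attention_decode_step(
    reps, rng.normal(size=hidden), rng.normal(size=(hidden, vocab)))
posteriors = acoustic_model(reps, rng.normal(size=(hidden, n_states)) * 0.1)
```

In both cases the pretrained encoder does the heavy lifting; the comparison in the paper is essentially over what sits on top of it and how it is trained.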

Original language: English
Title of host publication: Proceedings of Interspeech'22
Publisher: International Speech Communication Association (ISCA)
Number of pages: 5
Publication status: Published - 2022
MoE publication type: A4 Conference publication
Event: Interspeech - Incheon, Korea, Republic of
Duration: 18 Sept 2022 - 22 Sept 2022

Publication series

Name: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publisher: International Speech Communication Association
ISSN (Print): 2958-1796


Country/Territory: Korea, Republic of


Keywords

  • low resource
  • speech recognition
  • wav2vec 2.0


