Low Resource Comparison of Attention-based and Hybrid ASR Exploiting wav2vec 2.0

Aku Rouhe*, Anja Virkkunen, Juho Leinonen, Mikko Kurimo

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Abstract

Low resource speech recognition stands to benefit substantially from exploiting a pretrained model such as wav2vec 2.0. These pretrained models have learned useful representations in an unsupervised or self-supervised task, often leveraging a very large corpus of untranscribed speech. The pretrained models can then be used in various ways. In this work we compare two approaches which exploit wav2vec 2.0: an attention-based end-to-end model (AED), where the wav2vec 2.0 model is used in the model encoder, and a hybrid hidden Markov model (HMM/DNN) speech recognition system, where the wav2vec 2.0 model is used in the acoustic model. These approaches are compared on a very difficult Northern Sámi task, as well as on an easier, simulated low resource task in Finnish. We find that the wav2vec 2.0 AED models can learn a working attention mechanism, but are still outperformed by wav2vec 2.0 HMM/DNN systems. Our best wav2vec 2.0 HMM/DNN recipe trained on 20 hours is competitive with an HMM/DNN system trained on 1600 hours.

Original language: English
Title of host publication: Proceedings of Interspeech'22
Publisher: International Speech Communication Association
Pages: 3543-3547
Number of pages: 5
Publication status: Published - 2022
MoE publication type: A4 Article in a conference publication
Event: Interspeech - Incheon, Korea, Republic of
Duration: 18 Sep 2022 - 22 Sep 2022

Publication series

Name: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publisher: International Speech Communication Association
ISSN (Print): 2308-457X
ISSN (Electronic): 1990-9772

Conference

Conference: Interspeech
Country/Territory: Korea, Republic of
City: Incheon
Period: 18/09/2022 - 22/09/2022

Keywords

  • low resource
  • speech recognition
  • wav2vec 2.0
