Principled Comparisons for End-to-End Speech Recognition: Attention vs Hybrid at the 1000-hour Scale

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

End-to-end speech recognition has become the center of attention for speech recognition research, but hybrid Hidden Markov Model / Deep Neural Network (HMM/DNN) systems remain a competitive approach in terms of performance. End-to-end models may be better at very large data scales, and HMM/DNN systems may have an advantage in low-resource scenarios, but the thousand-hour scale is particularly interesting for comparisons. At that scale, experiments have not been able to conclusively demonstrate which approach is best, or whether the heterogeneous approaches yield similar results. In this work, we take steps towards answering that question for attention-based encoder-decoder models compared with HMM/DNN systems. We present two simple experimental design principles and show how to build systems that adhere to them. We demonstrate how these principles remove confounding variables related both to data and to neural architecture and training. We apply the principles in a set of experiments on three diverse thousand-hour-scale tasks. In our experiments, the HMM/DNN systems yield equal or better results in almost all cases.
Original language: English
Pages (from-to): 623-638
Number of pages: 16
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 32
Early online date: 24 Nov 2023
DOIs
Publication status: Published - 2024
MoE publication type: A1 Journal article-refereed

Keywords

  • ASR
  • HMM/DNN
  • End-to-End
