Modern subword-based models for automatic speech recognition

Peter Smit

Research output: Thesis › Doctoral Thesis › Collection of Articles

Abstract

In today's society, speech recognition systems have reached a mass audience, especially in the field of personal assistants such as Amazon Alexa or Google Home. Yet, this does not mean that speech recognition has been solved. On the contrary, for many domains, tasks, and languages such systems do not exist. Subword-based automatic speech recognition has been studied in the past for many reasons, often to overcome limitations on the size of the vocabulary. Specifically for agglutinative languages, where new words can be created on the fly, these limitations can be handled with a subword-based automatic speech recognition (ASR) system. Over time, however, subword-based systems lost some of their popularity as system resources increased and word-based models with large vocabularies became feasible. Still, subword-based models in modern ASR systems can predict words that have never been seen before and make better use of the available language modeling resources. Furthermore, subword models have smaller vocabularies, which makes neural network language models (NNLMs) easier to train and use. Hence, in this thesis we study subword models for ASR and make two major contributions. First, the thesis reintroduces subword-based modeling in a modern framework based on weighted finite-state transducers (WFSTs) and describes the tools necessary for building a sound and effective system; it does this through careful modification of the lexicon FST part of a WFST-based recognizer. Second, extensive experiments are performed with subwords and different types of language models, including n-gram models and NNLMs. These experiments cover six different languages and set new best published results for these datasets. Overall, we show that subword-based models can outperform word-based models in terms of ASR performance for many different types of languages. The thesis also details the design choices needed when building modern subword ASR systems, including the choice of segmentation algorithm, vocabulary size, and subword marking style. In addition, it presents techniques for combining speech recognition models trained on different units through system combination. Lastly, it evaluates the use of the smallest possible subword unit, characters, and shows that these models can be smaller and yet remain competitive with word-based models.
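The abstract names subword marking style as one of the key design choices. As a loose illustration only (not drawn from the thesis itself), the Python sketch below shows two marking conventions commonly used in subword ASR: an explicit word-boundary token and right-attached "+" continuation markers. The marker symbols, function names, and the example segmentation are all assumptions made for this sketch.

# Minimal sketch of two subword marking styles for ASR output.
# All symbols and names here are illustrative assumptions, not the
# exact conventions defined in the thesis.

def mark_with_boundary_token(segmented_words):
    """Insert an explicit word-boundary token <w> between words.
    `segmented_words` is a list of words, each given as a list of subwords."""
    tokens = ["<w>"]
    for subwords in segmented_words:
        tokens.extend(subwords)
        tokens.append("<w>")
    return tokens

def mark_with_affix_markers(segmented_words):
    """Right-marked style: append '+' to every subword that is continued
    by another subword of the same word."""
    tokens = []
    for subwords in segmented_words:
        for i, piece in enumerate(subwords):
            is_last = (i == len(subwords) - 1)
            tokens.append(piece if is_last else piece + "+")
    return tokens

def join_affix_marked(tokens):
    """Reconstruct words from right-marked subword tokens."""
    words, current = [], ""
    for tok in tokens:
        if tok.endswith("+"):
            current += tok[:-1]
        else:
            words.append(current + tok)
            current = ""
    return words

if __name__ == "__main__":
    # Hypothetical segmentation of "puheentunnistus toimii"
    segmentation = [["puheen", "tunnistus"], ["toimii"]]
    print(mark_with_boundary_token(segmentation))
    # ['<w>', 'puheen', 'tunnistus', '<w>', 'toimii', '<w>']
    print(mark_with_affix_markers(segmentation))
    # ['puheen+', 'tunnistus', 'toimii']
    print(join_affix_marked(mark_with_affix_markers(segmentation)))
    # ['puheentunnistus', 'toimii']

The choice between such conventions matters because it determines how words are reconstructed from recognizer output and how word boundaries are represented to the language model and in the lexicon FST.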
Translated title of the contribution: Modern subword-based models for automatic speech recognition
Original language: English
Qualification: Doctor's degree
Awarding Institution
  • Aalto University
Supervisors/Advisors
  • Kurimo, Mikko, Supervising Professor
  • Virpioja, Sami, Thesis Advisor
Print ISBNs: 978-952-60-8565-4
Electronic ISBNs: 978-952-60-8566-1
Publication status: Published - 2019
MoE publication type: G5 Doctoral dissertation (article)

Keywords

  • automatic speech recognition
  • language modeling
  • subword models
