Modeling under-resourced languages for speech recognition

Mikko Kurimo, Seppo Enarvi*, Ottokar Tilk, Matti Varjokallio, André Mansikkaniemi, Tanel Alumäe

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-reviewed


Abstract

One particular problem in large vocabulary continuous speech recognition for low-resourced languages is finding relevant training data for the statistical language models. A large amount of data is required, because the models should estimate the probability of all possible word sequences. For Finnish, Estonian and the other Finno-Ugric languages, a special problem with the data is the huge number of different word forms that are common in normal speech. The same problem also arises in other language technology applications, such as machine translation and information retrieval, and to some extent in other morphologically rich languages as well. In this paper we present methods and evaluations in four recent language modeling topics: selecting conversational data from the Internet, adapting models for foreign words, multi-domain and adapted neural network language modeling, and decoding with subword units. Our evaluations show that the same methods work in more than one language and that they scale down to smaller data resources.
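To illustrate the idea behind one of the topics listed above, the sketch below shows why subword units help when a morphologically rich language produces many rare word forms: splitting words into smaller units lets a limited corpus cover most unit sequences. This is only a minimal, hypothetical Python illustration, not the segmentation or language models used in the paper; the segment() rule and the add-one smoothed bigram are assumptions made for demonstration.

# Minimal sketch of subword-unit language modeling (illustrative only).
from collections import Counter
from itertools import chain

def segment(word, max_stem=5):
    # Toy segmentation: keep a short stem and mark the remainder as a suffix unit.
    # Real systems learn the segmentation from data instead.
    if len(word) <= max_stem:
        return [word]
    return [word[:max_stem] + "+", "+" + word[max_stem:]]

def train_bigram(sentences):
    # Count subword unigrams and bigrams, with sentence-boundary symbols.
    unigrams, bigrams = Counter(), Counter()
    for sentence in sentences:
        units = ["<s>"] + list(chain.from_iterable(segment(w) for w in sentence.split())) + ["</s>"]
        unigrams.update(units)
        bigrams.update(zip(units, units[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, unit):
    # Add-one smoothed conditional probability P(unit | prev).
    vocab_size = len(unigrams)
    return (bigrams[(prev, unit)] + 1) / (unigrams[prev] + vocab_size)

if __name__ == "__main__":
    corpus = ["taloissa asutaan", "talossa asutaan mukavasti"]
    uni, bi = train_bigram(corpus)
    print(bigram_prob(uni, bi, "talos+", "+sa"))

Because inflected forms such as "taloissa" and "talossa" share subword units, probability mass learned from one form transfers to the other, which is the effect the paper exploits with far more principled segmentation and models.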

Original language: English
Pages (from-to): 961-987
Number of pages: 27
Journal: LANGUAGE RESOURCES AND EVALUATION
Volume: 51
Issue number: 4
Early online date: 10 Feb 2016
DOIs
Publication status: Published - Dec 2017
MoE publication type: A1 Journal article-refereed

Keywords

  • Adaptation
  • Data filtering
  • Large vocabulary speech recognition
  • Statistical language modeling
  • Subword units
