Using stacked transformations for recognizing foreign accented speech

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Original language: English
Title of host publication: Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on
Publication status: Published - 2011
MoE publication type: A4 Article in a conference publication

Publication series

Name: IEEE International Conference on Acoustics, Speech and Signal Processing. Proceedings
ISSN (Print): 1520-6149



A common problem in speech recognition for foreign-accented speech is that there is not enough training data for an accent-specific or a speaker-specific recognizer. Speaker adaptation can improve the accuracy of a speaker-independent recognizer, but speakers with a strong foreign accent require a lot of adaptation data. In this paper we propose a simple and effective technique of stacked transformations, where baseline models trained for native speakers are first adapted with accent-specific data and then with a second transformation estimated from speaker-specific data. Because the accent-specific data can be collected offline, the first transformation can be detailed and comprehensive, while the second can be lighter and fast to estimate. Experimental results are provided for speaker adaptation in English spoken by Finnish speakers. The evaluation results confirm that stacked transformations are very helpful for fast speaker adaptation.
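The stacking idea in the abstract can be sketched as two CMLLR-style affine feature transforms applied in sequence: an accent-level transform estimated offline from pooled data, followed by a speaker-level transform estimated from a small amount of speaker data. The toy code below is only an illustration of this composition, not the authors' implementation; all names, dimensions, and values are assumptions.

```python
import numpy as np

def make_transform(A, b):
    """Return a function applying the affine (CMLLR-style) transform x -> A @ x + b."""
    def apply(x):
        return A @ x + b
    return apply

def stack(*transforms):
    """Compose transforms left to right: the accent-level transform is
    applied first, then the speaker-level transform on top of it."""
    def apply(x):
        for t in transforms:
            x = t(x)
        return x
    return apply

rng = np.random.default_rng(0)
dim = 4  # toy feature dimension

# Accent-level transform: assumed to be estimated offline from accent-specific data.
A_acc = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))
b_acc = rng.standard_normal(dim)
# Speaker-level transform: assumed to be estimated quickly from a little speaker data.
A_spk = np.eye(dim) + 0.05 * rng.standard_normal((dim, dim))
b_spk = rng.standard_normal(dim)

stacked = stack(make_transform(A_acc, b_acc), make_transform(A_spk, b_spk))

# Stacking two affine transforms is itself affine:
# (A_spk A_acc, A_spk b_acc + b_spk), so the composition stays cheap to apply.
x = rng.standard_normal(dim)
expected = A_spk @ (A_acc @ x + b_acc) + b_spk
assert np.allclose(stacked(x), expected)
```

The design point the paper exploits is that the two transforms play different roles: the accent transform can be rich because its data is collected once offline, while the speaker transform sitting on top of it needs only a small correction, so it can be estimated from very little adaptation data.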

    Research areas

  • automatic speech recognition, CMLLR transformation, foreign-accent recognition, stacked transformations

ID: 563345