Normal-to-Lombard adaptation of speech synthesis using long short-term memory recurrent neural networks

Bajibabu Bollepalli, Lauri Juvela, Manu Airaksinen, Cassia Valentini-Botinhao, Paavo Alku

Research output: Contribution to journal › Article › Scientific › peer-review

9 Citations (Scopus)
89 Downloads (Pure)


In this article, three adaptation methods are compared based on how well they change the speaking style of a neural network-based text-to-speech (TTS) voice. The speaking style conversion adopted here is from normal to Lombard speech. The selected adaptation methods are: auxiliary features (AF), learning hidden unit contribution (LHUC), and fine-tuning (FT). Furthermore, four state-of-the-art TTS vocoders are compared in the same context. The evaluated vocoders are: GlottHMM, GlottDNN, STRAIGHT, and pulse model in log-domain (PML). Objective and subjective evaluations were conducted to study the performance of both the adaptation methods and the vocoders. In the subjective evaluations, speaking style similarity and speech intelligibility were assessed. In addition to acoustic model adaptation, phoneme durations were also adapted from normal to Lombard with the FT adaptation method. In the objective evaluations and speaking style similarity tests, we found that the FT method outperformed the other two adaptation methods. In the speech intelligibility tests, we found no significant differences between vocoders, although the PML vocoder performed slightly better than the other three.
Original language: English
Pages (from-to): 64-75
Number of pages: 12
Journal: Speech Communication
Publication status: Published - 1 Jul 2019
MoE publication type: A1 Journal article-refereed


Keywords:
  • Lombard
  • Auxiliary features
  • LHUC
  • Fine-tuning
  • LSTM
  • Adaptation
  • TTS


