Normal-to-Lombard adaptation of speech synthesis using long short-term memory recurrent neural networks

Bajibabu Bollepalli, Lauri Juvela, Manu Airaksinen, Cassia Valentini-Botinhao, Paavo Alku

    Research output: Contribution to journal › Article › Scientific › peer-review

    10 Citations (Scopus)
    95 Downloads (Pure)

    Abstract

    In this article, three adaptation methods are compared based on how well they change the speaking style of a neural network-based text-to-speech (TTS) voice. The speaking style conversion adopted here is from normal to Lombard speech. The selected adaptation methods are: auxiliary features (AF), learning hidden unit contribution (LHUC), and fine-tuning (FT). Furthermore, four state-of-the-art TTS vocoders are compared in the same context: GlottHMM, GlottDNN, STRAIGHT, and the pulse model in log-domain (PML). Objective and subjective evaluations were conducted to study the performance of both the adaptation methods and the vocoders. The subjective evaluations assessed speaking style similarity and speech intelligibility. In addition to acoustic model adaptation, phoneme durations were also adapted from normal to Lombard with the FT method. In the objective evaluations and the speaking style similarity tests, the FT method outperformed the other two adaptation methods. In the speech intelligibility tests, no significant differences were found between the vocoders, although the PML vocoder performed slightly better than the other three.
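    Of the three adaptation methods named in the abstract, LHUC is the one most easily illustrated in isolation: a pre-trained layer's outputs are re-weighted by per-unit amplitudes 2·sigmoid(a), and only the small vector a is learned during adaptation while the base network stays frozen. The sketch below shows that re-weighting in plain Python; it illustrates the general LHUC idea, not the authors' actual LSTM implementation, and the function name is hypothetical.

    ```python
    import math

    def lhuc_scale(hidden, lhuc_params):
        """Re-weight hidden-unit activations with learned LHUC amplitudes.

        Each unit's output is multiplied by 2 * sigmoid(a), so amplitudes
        lie in (0, 2). With a = 0 the amplitude is exactly 1.0, i.e. the
        adapted layer starts out identical to the pre-trained one.
        """
        return [h * 2.0 / (1.0 + math.exp(-a)) for h, a in zip(hidden, lhuc_params)]

    # At initialization (a = 0) the layer output is unchanged:
    print(lhuc_scale([0.5, -1.2, 0.8], [0.0, 0.0, 0.0]))  # → [0.5, -1.2, 0.8]
    ```

    During adaptation to Lombard speech, only the `lhuc_params` vector (one scalar per hidden unit) would be updated on the adaptation data, which is why LHUC needs far fewer adapted parameters than full fine-tuning.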
    Original language: English
    Pages (from-to): 64-75
    Number of pages: 12
    Journal: Speech Communication
    Volume: 110
    DOIs
    Publication status: Published - 1 Jul 2019
    MoE publication type: A1 Journal article-refereed

    Keywords

    • Lombard
    • Auxiliary features
    • LHUC
    • Fine-tuning
    • LSTM
    • Adaptation
    • TTS
