Lombard speech synthesis using transfer learning in a Tacotron text-to-speech system

Bajibabu Bollepalli, Lauri Juvela, Paavo Alku

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

15 Citations (Scopus)
690 Downloads (Pure)

Abstract

Currently, there is increasing interest in using sequence-to-sequence models with attention, such as Tacotron, in text-to-speech (TTS) synthesis. These models are trained end-to-end, meaning that they learn both co-articulation and duration properties directly from paired text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech of good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, we propose a transfer learning method to adapt a TTS system of normal speaking style to Lombard style. We also experiment with a WaveNet vocoder alongside a traditional vocoder (WORLD) in the synthesis of Lombard speech. The subjective and objective evaluation results indicate that the proposed adaptation system coupled with the WaveNet vocoder clearly outperforms the conventional deep neural network based TTS system in the synthesis of Lombard speech.
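
To illustrate the adaptation strategy described in the abstract, the following is a minimal PyTorch-style sketch of transfer learning for a Tacotron-like acoustic model: weights pre-trained on normal-style speech are loaded and then fine-tuned on a small Lombard-style corpus. The model class, checkpoint path, data loader, and hyperparameters are illustrative assumptions, not artefacts released with the paper.

```python
# Minimal sketch (assumed setup, not the authors' released code):
# pre-train on normal-style speech, then fine-tune on Lombard data.
import torch
from torch import nn, optim

class Tacotron(nn.Module):
    """Stand-in for a full sequence-to-sequence Tacotron acoustic model."""
    def __init__(self, vocab_size=70, mel_dim=80, hidden=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.mel_out = nn.Linear(hidden, mel_dim)

    def forward(self, text_ids):
        x = self.embedding(text_ids)
        enc, _ = self.encoder(x)
        dec, _ = self.decoder(enc)      # attention omitted for brevity
        return self.mel_out(dec)

model = Tacotron()

# Step 1: initialise from a model trained on plentiful normal-style speech
# (checkpoint path is hypothetical).
pretrained_state = torch.load("tacotron_normal_style.pt", map_location="cpu")
model.load_state_dict(pretrained_state)

# Step 2: fine-tune all weights on the small Lombard corpus, typically with a
# reduced learning rate so the model adapts rather than overfits.
optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()

def fine_tune(lombard_loader, epochs=10):
    """lombard_loader yields (text_ids, mel_target) batches; placeholder name."""
    model.train()
    for _ in range(epochs):
        for text_ids, mel_target in lombard_loader:
            optimizer.zero_grad()
            loss = criterion(model(text_ids), mel_target)
            loss.backward()
            optimizer.step()
```

The generated mel-spectrograms would then be passed to a vocoder (WaveNet or WORLD in the paper) to produce the Lombard-style waveform.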
Original language: English
Title of host publication: Proceedings of Interspeech
Publisher: International Speech Communication Association (ISCA)
Pages: 2833-2837
DOIs
Publication status: Published - 2019
MoE publication type: A4 Conference publication
Event: Interspeech - Graz, Austria
Duration: 15 Sept 2019 - 19 Sept 2019
https://www.interspeech2019.org/

Publication series

Name: Interspeech - Annual Conference of the International Speech Communication Association
ISSN (Electronic): 2308-457X

Conference

Conference: Interspeech
Country/Territory: Austria
City: Graz
Period: 15/09/2019 - 19/09/2019

Keywords

  • Adaptation
  • Lombard speaking style
  • Tacotron
  • Text-To-Speech (TTS)

