Waveform generation for text-to-speech synthesis using pitch-synchronous multi-scale generative adversarial networks

Lauri Juvela, Bajibabu Bollepalli, Junichi Yamagishi, Paavo Alku

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

17 Citations (Scopus)
254 Downloads (Pure)

Abstract

The state-of-the-art in text-to-speech (TTS) synthesis has recently improved considerably due to novel neural waveform generation methods, such as WaveNet. However, these methods suffer from their slow sequential inference process, while their parallel versions are difficult to train and even more computationally expensive. Meanwhile, generative adversarial networks (GANs) have achieved impressive results in image generation and are making their way into audio applications; parallel inference is among their attractive properties. By adopting recent advances in GAN training techniques, this investigation studies waveform generation for TTS in two domains (speech signal and glottal excitation). Listening test results show that while direct waveform generation with GAN is still far behind WaveNet, a GAN-based glottal excitation model can achieve quality and voice similarity on par with a WaveNet vocoder.
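
The abstract describes conditional GAN-based waveform (and glottal excitation) generation, but the record contains no code. As a rough, hedged sketch of that kind of model, the PyTorch snippet below pairs a 1-D convolutional generator, which upsamples frame-rate acoustic features plus noise to a waveform segment, with a patch-wise 1-D convolutional discriminator. All layer sizes, feature dimensions, and the conditioning scheme are illustrative assumptions, not the authors' pitch-synchronous multi-scale architecture.

    # Illustrative sketch only; not the paper's implementation.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps acoustic features plus noise to a waveform segment."""
        def __init__(self, cond_dim=80, noise_dim=16):
            super().__init__()
            self.net = nn.Sequential(
                # Upsample frame-rate conditioning towards sample rate (8x per layer).
                nn.ConvTranspose1d(cond_dim + noise_dim, 256, kernel_size=16, stride=8, padding=4),
                nn.LeakyReLU(0.2),
                nn.ConvTranspose1d(256, 128, kernel_size=16, stride=8, padding=4),
                nn.LeakyReLU(0.2),
                nn.Conv1d(128, 1, kernel_size=15, padding=7),
                nn.Tanh(),  # waveform samples in [-1, 1]
            )

        def forward(self, cond, noise):
            # cond: (batch, cond_dim, frames), noise: (batch, noise_dim, frames)
            return self.net(torch.cat([cond, noise], dim=1))

    class Discriminator(nn.Module):
        """Produces patch-wise real/fake scores for a waveform segment."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 64, kernel_size=15, stride=4, padding=7),
                nn.LeakyReLU(0.2),
                nn.Conv1d(64, 128, kernel_size=15, stride=4, padding=7),
                nn.LeakyReLU(0.2),
                nn.Conv1d(128, 1, kernel_size=3, padding=1),
            )

        def forward(self, wav):
            return self.net(wav)

    if __name__ == "__main__":
        G, D = Generator(), Discriminator()
        cond = torch.randn(2, 80, 10)   # 10 frames of (assumed) 80-dim acoustic features
        noise = torch.randn(2, 16, 10)
        wav = G(cond, noise)            # (2, 1, 640): 64x upsampling of the frame sequence
        scores = D(wav)                 # (2, 1, 40) patch scores
        print(wav.shape, scores.shape)

In the paper this adversarial setup is applied either directly to the speech waveform or to the glottal excitation signal, which is then filtered into speech; the sketch above leaves out that distinction and the multi-scale, pitch-synchronous details.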
Original language: English
Title of host publication: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
Pages: 6915-6919
Number of pages: 5
ISBN (Electronic): 978-1-4799-8131-1
ISBN (Print): 978-1-4799-8132-8
DOIs
Publication status: Published - 1 May 2019
MoE publication type: A4 Conference publication
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - Brighton, United Kingdom
Duration: 12 May 2019 - 17 May 2019
Conference number: 44

Publication series

Name: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
ISSN (Print): 1520-6149
ISSN (Electronic): 2379-190X

Conference

Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing
Abbreviated title: ICASSP
Country/Territory: United Kingdom
City: Brighton
Period: 12/05/2019 - 17/05/2019

Keywords

  • Neural vocoding
  • text-to-speech
  • GAN
  • glottal excitation model
