Waveform generation for text-to-speech synthesis using pitch-synchronous multi-scale generative adversarial networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

2 Citations (Scopus)
118 Downloads (Pure)

Abstract

The state-of-the-art in text-to-speech (TTS) synthesis has recently improved considerably due to novel neural waveform generation methods, such as WaveNet. However, these methods suffer from a slow sequential inference process, while their parallel versions are difficult to train and even more computationally expensive. Meanwhile, generative adversarial networks (GANs) have achieved impressive results in image generation and are making their way into audio applications; parallel inference is among their appealing properties. By adopting recent advances in GAN training techniques, this investigation studies waveform generation for TTS in two domains (speech signal and glottal excitation). Listening test results show that while direct waveform generation with GAN is still far behind WaveNet, a GAN-based glottal excitation model can achieve quality and voice similarity on par with a WaveNet vocoder.
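The abstract's contrast between WaveNet-style and GAN-style inference can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's model: `autoregressive_generate` mimics the sample-by-sample dependency that makes WaveNet inference sequential, while `gan_generate` mimics a feedforward generator that maps a noise vector to the whole waveform in a single parallel pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def autoregressive_generate(n_samples, w=0.9):
    # WaveNet-style toy model: each sample depends on the previously
    # generated sample, so inference requires one sequential step per sample.
    y = np.zeros(n_samples)
    for t in range(1, n_samples):
        y[t] = np.tanh(w * y[t - 1] + rng.normal(scale=0.1))
    return y

def gan_generate(n_samples, weight=0.5):
    # GAN-style toy generator: a single feedforward transform of a noise
    # vector produces all samples at once, enabling parallel inference.
    z = rng.normal(size=n_samples)
    return np.tanh(weight * z)

seq = autoregressive_generate(16000)  # one second at 16 kHz, sequential
par = gan_generate(16000)             # same length, one parallel pass
```

The sketch only shows the inference pattern; the actual models in the paper are deep convolutional networks with adversarial training, and the GAN operates pitch-synchronously at multiple scales.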
Original language: English
Title of host publication: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
Pages: 6915-6919
Number of pages: 5
ISBN (Electronic): 978-1-4799-8131-1
ISBN (Print): 978-1-4799-8132-8
DOIs: 10.1109/ICASSP.2019.8683271
Publication status: Published - 1 May 2019
MoE publication type: A4 Article in a conference publication
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - Brighton, United Kingdom
Duration: 12 May 2019 - 17 May 2019
Conference number: 44

Publication series

Name: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
ISSN (Print): 1520-6149
ISSN (Electronic): 2379-190X

Conference

Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing
Abbreviated title: ICASSP
Country: United Kingdom
City: Brighton
Period: 12/05/2019 - 17/05/2019

Keywords

  • Neural vocoding
  • text-to-speech
  • GAN
  • glottal excitation model



  • Cite this

    Juvela, L., Bollepalli, B., Yamagishi, J., & Alku, P. (2019). Waveform generation for text-to-speech synthesis using pitch-synchronous multi-scale generative adversarial networks. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6915-6919). [8683271] (Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing). IEEE. https://doi.org/10.1109/ICASSP.2019.8683271