Speaker-independent neural formant synthesis

Pablo Pérez Zarazaga, Zofia Malisz, Gustav Eje Henter, Lauri Juvela

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

We describe speaker-independent speech synthesis driven by a small set of phonetically meaningful speech parameters such as formant frequencies. The intention is to leverage deep-learning advances to provide a highly realistic signal generator that includes control affordances required for stimulus creation in the speech sciences. Our approach turns input speech parameters into predicted mel-spectrograms, which are rendered into waveforms by a pre-trained neural vocoder. Experiments with WaveNet and HiFi-GAN confirm that the method achieves our goals of accurate control over speech parameters combined with high perceptual audio quality. We also find that the small set of phonetically relevant speech parameters we use is sufficient to allow for speaker-independent synthesis (a.k.a. universal vocoding).
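As a rough illustration of the pipeline the abstract describes, the sketch below maps a small per-frame vector of phonetic parameters to a mel-spectrogram with a compact PyTorch network. This is a minimal sketch under stated assumptions: the FormantToMel class name, the layer sizes, and the parameter set (F0, four formant frequencies, energy) are hypothetical and not taken from the paper. In the paper, the predicted mel-spectrogram is then rendered to a waveform by a separately pre-trained neural vocoder (WaveNet or HiFi-GAN).

import torch
import torch.nn as nn

class FormantToMel(nn.Module):
    """Hypothetical sketch: map per-frame phonetic parameters
    (e.g. F0 and formant frequencies) to a mel-spectrogram that a
    pre-trained neural vocoder could render to audio. Architecture
    and parameter set are illustrative, not the paper's."""

    def __init__(self, n_params: int = 6, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        # Convolutions capture local context over neighbouring frames.
        self.conv = nn.Sequential(
            nn.Conv1d(n_params, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # A recurrent layer models longer-range temporal structure.
        self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_mels)

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        # params: (batch, frames, n_params) -> mel: (batch, frames, n_mels)
        x = self.conv(params.transpose(1, 2)).transpose(1, 2)
        x, _ = self.rnn(x)
        return self.proj(x)

if __name__ == "__main__":
    model = FormantToMel()
    # One utterance, 200 frames of [F0, F1, F2, F3, F4, energy] (dummy values).
    params = torch.randn(1, 200, 6)
    mel = model(params)
    print(mel.shape)  # torch.Size([1, 200, 80])
    # In the paper's setup, mel would be passed to a separately loaded,
    # pre-trained neural vocoder (WaveNet or HiFi-GAN) to produce audio.

In a real system the network would be trained so that the predicted mel-spectrograms match those extracted from natural speech, which is what lets a frozen, speaker-independent vocoder produce high-quality waveforms from them.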
Original language: English
Title of host publication: Proceedings of Interspeech 2023
Publisher: International Speech Communication Association (ISCA)
Number of pages: 5
DOIs
Publication status: Published - 2023
MoE publication type: A4 Conference publication
Event: Interspeech - Dublin, Ireland
Duration: 20 Aug 2023 – 24 Aug 2023

Publication series

Name: Interspeech
Publisher: International Speech Communication Association
ISSN (Electronic): 2958-1796

Conference

Conference: Interspeech
Country/Territory: Ireland
City: Dublin
Period: 20/08/2023 – 24/08/2023
