Conditional Spoken Digit Generation with StyleGAN

Kasperi Palkama, Lauri Juvela, Alexander Ilin

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

5 Citations (Scopus)
179 Downloads (Pure)

Abstract

This paper adapts a StyleGAN model for speech generation with minimal or no conditioning on text. StyleGAN is a multi-scale convolutional GAN capable of hierarchically capturing data structure and latent variation on multiple spatial (or temporal) levels. The model has previously achieved impressive results on facial image generation, and it is appealing to audio applications because similar multi-level structures are present in the data. In this paper, we train a StyleGAN to generate mel-spectrograms on the Speech Commands dataset, which contains spoken digits uttered by multiple speakers in varying acoustic conditions. In the conditional setting, our model is conditioned on the digit identity, while learning the remaining data variation remains an unsupervised task. We compare our model to the current unsupervised state-of-the-art speech synthesis GAN architecture, the WaveGAN, and show that the proposed model outperforms it according to both numerical measures and subjective evaluation in listening tests.
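To make the described setup concrete, the following is a minimal, hypothetical sketch of class-conditional, StyleGAN-style mel-spectrogram generation in PyTorch. It is not the authors' implementation: the module names (MappingNetwork, SynthesisBlock, Generator), the latent and channel dimensions, and the output resolution are illustrative assumptions; it only shows the general idea of a mapping network conditioned on digit identity whose style vector modulates a multi-scale convolutional synthesis network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MappingNetwork(nn.Module):
    """Maps a latent z and a digit label to an intermediate style vector w (illustrative)."""
    def __init__(self, z_dim=128, w_dim=128, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(num_classes, z_dim)  # digit-identity conditioning
        self.mlp = nn.Sequential(
            nn.Linear(2 * z_dim, w_dim), nn.LeakyReLU(0.2),
            nn.Linear(w_dim, w_dim), nn.LeakyReLU(0.2),
        )

    def forward(self, z, labels):
        return self.mlp(torch.cat([z, self.embed(labels)], dim=1))

class SynthesisBlock(nn.Module):
    """Upsamples a feature map and modulates it per channel with the style vector w."""
    def __init__(self, in_ch, out_ch, w_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.to_scale = nn.Linear(w_dim, out_ch)  # per-channel style modulation
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, w):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        x = self.conv(x)
        scale = self.to_scale(w).unsqueeze(-1).unsqueeze(-1)
        return self.act(x * (1 + scale))

class Generator(nn.Module):
    """Grows a learned 4x4 constant through four upsampling blocks to a 64x64
    single-channel map standing in for a mel-spectrogram (dimensions are illustrative)."""
    def __init__(self, w_dim=128, base_ch=256):
        super().__init__()
        self.mapping = MappingNetwork(w_dim=w_dim)
        self.const = nn.Parameter(torch.randn(1, base_ch, 4, 4))
        chans = [base_ch, 128, 64, 32, 16]
        self.blocks = nn.ModuleList(
            [SynthesisBlock(chans[i], chans[i + 1], w_dim) for i in range(len(chans) - 1)]
        )
        self.to_mel = nn.Conv2d(chans[-1], 1, kernel_size=1)  # single-channel spectrogram

    def forward(self, z, labels):
        w = self.mapping(z, labels)
        x = self.const.expand(z.size(0), -1, -1, -1)
        for block in self.blocks:
            x = block(x, w)  # the same style vector steers every spectral/temporal scale
        return self.to_mel(x)

# Usage: sample mel-spectrogram-like outputs for the spoken digits 0-3.
G = Generator()
z = torch.randn(4, 128)
digits = torch.tensor([0, 1, 2, 3])
mels = G(z, digits)
print(mels.shape)  # torch.Size([4, 1, 64, 64])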

Original language: English
Title of host publication: Proceedings of Interspeech
Publisher: International Speech Communication Association (ISCA)
Pages: 3166-3170
Number of pages: 5
DOIs
Publication status: Published - 2020
MoE publication type: A4 Conference publication
Event: Interspeech - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020
Conference number: 21
http://www.interspeech2020.org/

Publication series

Name: Interspeech
Publisher: International Speech Communication Association
ISSN (Electronic): 1990-9772

Conference

Conference: Interspeech
Abbreviated title: INTERSPEECH
Country/Territory: China
City: Shanghai
Period: 25/10/2020 - 29/10/2020
Internet address: http://www.interspeech2020.org/
