Interpretable Latent Space Using Space-Filling Curves for Phonetic Analysis in Voice Conversion

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review



Vector quantized variational autoencoders (VQ-VAE) are well-known deep generative models, which map input data to a latent space that is used for data generation. Such latent spaces are unstructured and can thus be difficult to interpret. Some earlier approaches have introduced a structure to the latent space through supervised learning by defining data labels as latent variables. In contrast, we propose an unsupervised technique incorporating space-filling curves into vector quantization (VQ), which yields an arranged form of latent vectors such that adjacent elements in the VQ codebook refer to similar content. We applied this technique to the latent codebook vectors of a VQ-VAE, which encode the phonetic information of a speech signal in a voice conversion task. Our experiments show there is a clear arrangement in latent vectors representing speech phones, which clarifies what phone each latent vector corresponds to and facilitates other detailed interpretations of latent vectors.
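To illustrate the arrangement the abstract describes — adjacent entries in the VQ codebook referring to similar content — the sketch below reorders a toy codebook with a greedy nearest-neighbor chain. This is only a rough 1-D stand-in for the idea, not the paper's actual technique (which incorporates space-filling curves into the vector quantization itself during training); all names and parameters here are illustrative assumptions.

```python
import numpy as np

def order_codebook(codebook):
    """Greedily chain codebook vectors so that adjacent entries in the
    reordered codebook are close in Euclidean distance. A simple
    illustration of an 'arranged' codebook; the paper's space-filling-curve
    training procedure is not reproduced here."""
    n = len(codebook)
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        last = codebook[order[-1]]
        # pick the remaining code vector closest to the end of the chain
        nxt = min(remaining, key=lambda i: np.linalg.norm(codebook[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return np.array(order)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))   # toy VQ codebook: 16 codes, 8 dims
order = order_codebook(codebook)
reordered = codebook[order]

# adjacent entries are now, on average, closer than in the original layout
orig_gaps = np.linalg.norm(np.diff(codebook, axis=0), axis=1).mean()
new_gaps = np.linalg.norm(np.diff(reordered, axis=0), axis=1).mean()
print(f"mean adjacent distance: {orig_gaps:.3f} -> {new_gaps:.3f}")
```

In the paper's setting the codebook vectors encode speech phones, so an arrangement of this kind means neighboring codebook indices map to phonetically similar sounds, which is what makes the latent space interpretable.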
Original language: English
Title of host publication: Proceedings of Interspeech Conference
Publisher: International Speech Communication Association (ISCA)
Number of pages: 5
Publication status: Published - 2023
MoE publication type: A4 Conference publication
Event: Interspeech - Dublin, Ireland
Duration: 20 Aug 2023 – 24 Aug 2023

Publication series

Publisher: International Speech Communication Association
ISSN (Electronic): 2958-1796


Keywords
  • Interpretable latent space
  • phonetic analysis
  • space-filling curves
  • vector quantization
  • voice conversion

