Phase perception of the glottal excitation and its relevance in statistical parametric speech synthesis

Tuomo Raitio*, Lauri Juvela, Antti Suni, Martti Vainio, Paavo Alku

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


While the characteristics of the amplitude spectrum of the voiced excitation have been studied widely in both natural and synthetic speech, the role of the excitation phase has remained less explored. This contrasts with findings from sound perception studies indicating that humans are not phase-deaf. In speech synthesis especially, phase information is often omitted for simplicity. This study investigates the impact of the phase information of the excitation signal of voiced speech and its relevance in statistical parametric speech synthesis. The experiments in the study involve, first, converting the pitch-synchronously computed original phase spectra of the excitation waveforms (either glottal flow waveforms or residuals) to zero phase, cyclostationary random phase, or random phase. Second, the quality of synthetic speech in each case is compared in subjective listening tests to the corresponding signal excited with the original, natural phase. Experiments are conducted with natural, vocoded, and synthetic speech using voice material from various speakers with varying speaking styles, such as breathy, normal, and Lombard speech. The results indicate that the phase spectrum of the voiced excitation has a perceptually relevant effect in natural, vocoded, and synthetic speech, and that utilizing the phase information in speech synthesis leads to improved speech quality.
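The phase-manipulation experiment described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: it assumes one pitch-synchronous excitation period is already extracted as a NumPy array, replaces its phase spectrum while preserving its magnitude spectrum, and resynthesizes the waveform via the inverse FFT. The function name `set_phase` and its `mode` argument are hypothetical; the cyclostationary-random-phase condition would correspond to reusing one fixed random phase spectrum across all pitch periods.

```python
import numpy as np

def set_phase(frame, mode="zero", rng=None):
    """Replace the phase spectrum of one excitation period, keeping
    its magnitude spectrum intact.

    mode: 'zero'     -> zero phase (symmetric, impulse-like pulse)
          'random'   -> uniform random phase in [-pi, pi)
          'original' -> keep the natural phase (identity transform)
    For the cyclostationary-random-phase condition, pass the same
    seeded rng (or precomputed phase) for every period of the signal.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    spec = np.fft.rfft(frame)              # one-sided spectrum of a real signal
    mag = np.abs(spec)
    if mode == "zero":
        phase = np.zeros_like(mag)
    elif mode == "random":
        phase = rng.uniform(-np.pi, np.pi, size=mag.shape)
        phase[0] = 0.0                     # DC bin must stay real
        if frame.size % 2 == 0:
            phase[-1] = 0.0                # Nyquist bin must stay real
    else:
        phase = np.angle(spec)             # original, natural phase
    return np.fft.irfft(mag * np.exp(1j * phase), n=frame.size)
```

Because only the phase is altered, the amplitude spectrum of the resynthesized period is identical to the original, which isolates the perceptual contribution of phase, as in the listening tests described in the abstract.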

Original language: English
Pages (from-to): 104–119
Journal: Speech Communication
Publication status: Published - Jun 2016
MoE publication type: A1 Journal article-refereed


  • Glottal flow excitation
  • Phase perception
  • Statistical parametric speech synthesis
  • Vocoding


