In the production of voiced speech, epochs or glottal closure instants (GCIs) are the instants of significant excitation of the vocal tract. GCI extraction is used as a pre-processing stage in many areas of speech technology, such as prosody modification, speech synthesis and voice source analysis. Over the past decades, several GCI detection algorithms have been developed, and most of them provide excellent results for speech produced with the modal (normal) type of phonation. There are, however, no studies comparing multiple state-of-the-art GCI detection methods on emotional speech. In this paper, we compare six GCI detection algorithms on emotional speech using known evaluation metrics. We use the Berlin EMO-DB acted emotional speech database, which contains seven emotions and simultaneous electroglottography (EGG) recordings as ground truth. The results show that all six GCI detection algorithms perform best on speech of neutral emotion and that performance degrades particularly for emotions of high arousal (anger and joy). To improve GCI detection in emotional speech, the study underlines the importance of local average pitch period estimates.
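The abstract refers to "known evaluation metrics" without listing them; GCI evaluations of this kind are commonly reported with cycle-based measures such as identification rate, miss rate, false alarm rate and identification accuracy, where each larynx cycle is centered on a reference GCI obtained from the EGG signal. The sketch below is a minimal, illustrative implementation of that cycle-based scheme, not the paper's exact evaluation code; the function name and the 0.25 ms accuracy threshold are assumptions.

```python
def gci_metrics(ref_gcis, det_gcis, acc_threshold=0.25e-3):
    """Cycle-based GCI evaluation (illustrative sketch).

    Each larynx cycle is centered on a reference GCI (e.g. from EGG) and
    bounded by the midpoints to its neighbouring reference GCIs.  A cycle
    holding exactly one detection is a hit, none is a miss, and several
    detections count as a false alarm.  Times are in seconds.
    """
    ref = sorted(ref_gcis)
    det = sorted(det_gcis)
    # Cycle boundaries: midpoints between consecutive reference GCIs,
    # extended to +/- infinity at both ends.
    mids = [(a + b) / 2.0 for a, b in zip(ref, ref[1:])]
    bounds = [float("-inf")] + mids + [float("inf")]
    hits, misses, false_alarms = 0, 0, 0
    timing_errors = []
    for i, r in enumerate(ref):
        in_cycle = [d for d in det if bounds[i] <= d < bounds[i + 1]]
        if len(in_cycle) == 1:
            hits += 1
            timing_errors.append(in_cycle[0] - r)
        elif not in_cycle:
            misses += 1
        else:
            false_alarms += 1
    n = len(ref)
    # Share of hits whose timing error stays within the accuracy threshold.
    accurate = sum(1 for e in timing_errors if abs(e) <= acc_threshold)
    return {
        "identification_rate": hits / n,
        "miss_rate": misses / n,
        "false_alarm_rate": false_alarms / n,
        "identification_accuracy": accurate / hits if hits else 0.0,
    }

# Toy example: a 100 Hz voice (10 ms pitch period), with the last reference
# GCI missed by the detector and the rest found 0.1 ms late.
ref = [i * 0.01 for i in range(10)]
det = [t + 0.0001 for t in ref[:-1]]
print(gci_metrics(ref, det))
# identification_rate 0.9, miss_rate 0.1, false_alarm_rate 0.0,
# identification_accuracy 1.0
```

Defining cycles from the reference (EGG-derived) GCIs rather than the detected ones keeps the denominator fixed across algorithms, which is what makes the rates comparable between the six detectors.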
Title of host publication: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
Number of pages: 5
Publication status: Published - May 2020
MoE publication type: A4 Article in a conference publication
Series: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing (conference number 45), virtual conference, Barcelona, Spain, 4 May 2020 → 8 May 2020
Keywords:
- Excitation source
- Glottal Closure Instants
- Speech analysis
Fingerprint: research topics of 'Comparison of glottal closure instants detection algorithms for emotional speech'.
1 finished project:
Interdisciplinary research on statistical parametric speech synthesis
Alku, P., Nonavinakere Prabhakera, N., Bollepalli, B., Bäckström, T., Murtola, T., Airaksinen, M. & Juvela, L.
01/01/2018 → 31/12/2019
Project: Academy of Finland: Other research funding