Glottal source estimation from coded telephone speech using a deep neural network

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


In speech analysis, information about the glottal source is obtained from speech by using glottal inverse filtering (GIF). The accuracy of state-of-the-art GIF methods is sufficiently high when the input speech signal is of high quality (i.e., with little noise or reverberation). However, in realistic conditions, particularly when GIF is computed from coded telephone speech, the accuracy of GIF methods deteriorates severely. To robustly estimate the glottal source under coded conditions, a deep neural network (DNN)-based method is proposed. The proposed method utilizes a DNN to map the speech features extracted from the coded speech to the glottal flow waveform estimated from the corresponding clean speech. To generate the coded telephone speech, the adaptive multi-rate (AMR) codec, a widely used speech compression method, is utilized. The proposed glottal source estimation method is compared with two existing GIF methods, closed phase covariance analysis (CP) and iterative adaptive inverse filtering (IAIF). The results indicate that the proposed DNN-based method is capable of estimating glottal flow waveforms from coded telephone speech with considerably better accuracy than CP and IAIF.
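The core idea of the abstract — training a DNN to regress the clean-speech glottal flow waveform from features of the coded speech — can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimension, target frame length, network size, synthetic data, and plain gradient-descent training below are all illustrative assumptions standing in for the paper's actual features and targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): 200 frames of 20-dim "coded-speech features"
# mapped to 32-sample "clean-speech glottal flow" target frames.
X = rng.standard_normal((200, 20))
true_W = rng.standard_normal((20, 32)) * 0.1
Y = np.tanh(X @ true_W)

# One-hidden-layer MLP: features -> tanh hidden layer -> waveform samples.
W1 = rng.standard_normal((20, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 32)) * 0.1
b2 = np.zeros(32)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

losses = []
lr = 0.05
for _ in range(300):
    H, pred = forward(X)
    err = pred - Y
    losses.append(np.mean(err ** 2))    # mean-squared-error objective
    # Backpropagation through the two layers.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)    # tanh derivative
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"training loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

At inference time the trained network would be applied frame by frame to features of unseen coded telephone speech, replacing the inverse-filtering step of CP or IAIF with a learned mapping.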


Original language: English
Title of host publication: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - Aug 2017
MoE publication type: A4 Article in a conference publication
Event: Interspeech - Stockholm, Sweden
Duration: 20 Aug 2017 - 24 Aug 2017
Conference number: 18

Publication series

Name: Interspeech: Annual Conference of the International Speech Communication Association
ISSN (Electronic): 1990-9772



Research areas

• glottal source estimation, glottal inverse filtering, deep neural network, telephone speech
