Dysarthric speech classification from coded telephone speech using glottal features
parameters and (2) parameters based on principal component analysis (PCA). In addition, acoustic features are extracted from coded telephone speech using the openSMILE toolkit. The proposed method utilizes both acoustic and glottal features extracted from coded speech utterances, together with their corresponding dysarthric/healthy labels, to train support vector machine classifiers. Separate
classifiers are trained using the individual feature sets as well as the combination of glottal and acoustic features. The coded telephone speech used in the experiments is generated using the adaptive multi-rate codec, which operates in two transmission bandwidths: narrowband (300 Hz - 3.4 kHz) and wideband (50 Hz - 7 kHz). The experiments were conducted using dysarthric and healthy speech utterances from the TORGO and universal access speech (UA-Speech) databases. Classification accuracy results indicated the effectiveness of glottal features in the identification of dysarthria from coded telephone speech. The results also showed that combining the glottal features with the openSMILE-based acoustic features improved classification accuracy, which validates the complementary nature of the glottal features. The proposed dysarthric speech classification method can potentially be employed in telemonitoring applications for identifying the presence of dysarthria from coded telephone speech.
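The training setup described above (separate SVM classifiers for the individual feature sets and for their combination) can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' implementation: the feature dimensions and the random placeholder data stand in for the real openSMILE acoustic features and glottal parameters extracted from coded speech.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the real feature matrices; the dimensions
# (88 acoustic, 12 glottal) are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n_utts = 200
acoustic = rng.normal(size=(n_utts, 88))   # placeholder openSMILE features
glottal = rng.normal(size=(n_utts, 12))    # placeholder glottal parameters
labels = rng.integers(0, 2, size=n_utts)   # 1 = dysarthric, 0 = healthy

# Train one classifier per feature set, plus one on the concatenation.
accuracies = {}
for name, X in [("acoustic", acoustic),
                ("glottal", glottal),
                ("combined", np.hstack([acoustic, glottal]))]:
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    accuracies[name] = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: {accuracies[name]:.2f}")
```

Feature scaling before the SVM matters in practice here, since acoustic functionals and glottal parameters typically live on very different numeric ranges.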
|Status||Published - July 2019|
|OKM publication type||A1 Refereed journal article|