WAV2VEC-based detection and severity level classification of dysarthria from speech

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

18 Citations (Scopus)

Abstract

Automatic detection and severity level classification of dysarthria directly from acoustic speech signals can be used as a tool in medical diagnosis. In this work, the pre-trained wav2vec 2.0 model is studied as a feature extractor for building detection and severity level classification systems for dysarthric speech. The experiments were carried out with the widely used UA-Speech database. In the detection experiments, the best performance was obtained using the embeddings from the first layer of the wav2vec 2.0 model, which yielded an absolute improvement of 1.23% in accuracy compared to the best-performing baseline feature (spectrogram). In the severity level classification task, the embeddings from the final layer gave an absolute improvement of 10.62% in accuracy compared to the best baseline feature (mel-frequency cepstral coefficients).
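
The abstract describes using a pre-trained wav2vec 2.0 model as a frozen feature extractor: raw speech is passed through the model, the hidden states of a chosen layer are pooled into an utterance-level embedding, and a downstream classifier is trained on those embeddings. The following is a minimal sketch of that idea, not the authors' exact pipeline; the facebook/wav2vec2-base checkpoint, mean pooling over time, and the SVM classifier are assumptions standing in for details the abstract does not specify.

```python
# Minimal sketch: wav2vec 2.0 layer embeddings as features for dysarthria
# detection/severity classification. Checkpoint, pooling, and classifier are
# assumptions; the paper's exact setup is not given in this abstract.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.svm import SVC

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
w2v = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
w2v.eval()

def utterance_embedding(wav_path: str, layer: int = 1):
    """Mean-pooled embedding from one wav2vec 2.0 layer."""
    waveform, sr = torchaudio.load(wav_path)
    # Resample to 16 kHz mono, as expected by wav2vec 2.0.
    waveform = torchaudio.functional.resample(waveform, sr, 16000).mean(dim=0)
    inputs = feature_extractor(waveform.numpy(), sampling_rate=16000,
                               return_tensors="pt")
    with torch.no_grad():
        out = w2v(**inputs, output_hidden_states=True)
    # hidden_states[0] is the CNN feature projection; [1:] are transformer
    # layers, so layer=1 is the first transformer layer (best for detection
    # in the paper) and layer=-1 the final layer (best for severity levels).
    frames = out.hidden_states[layer].squeeze(0)   # shape: (time, 768)
    return frames.mean(dim=0).numpy()              # pool over time -> (768,)

# Hypothetical usage with UA-Speech file lists and labels (not provided here):
# X_train = [utterance_embedding(f, layer=1) for f in train_files]
# clf = SVC(kernel="rbf").fit(X_train, train_labels)
# y_pred = clf.predict([utterance_embedding(f, layer=1) for f in test_files])
```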
Original language: English
Title of host publication: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'23)
Publisher: IEEE
Number of pages: 5
ISBN (Electronic): 978-1-7281-6327-7
DOIs
Publication status: Published - 2023
MoE publication type: A4 Conference publication
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - Rhodes Island, Greece
Duration: 4 Jun 2023 – 10 Jun 2023

Publication series

Name: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
ISSN (Electronic): 2379-190X

Conference

Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing
Abbreviated title: ICASSP
Country/Territory: Greece
City: Rhodes Island
Period: 04/06/2023 – 10/06/2023
