Abstract
Silent Speech Interfaces (SSI) perform articulatory-to-acoustic mapping to convert articulatory movement into synthesized speech. Their main goal is to aid the speech-impaired, or to serve as part of a communication system for environments where silence is required or background noise is high. Although many previous studies have addressed the speaker-dependency of SSI models, session-dependency is also an important issue, owing to the possible misalignment of the recording equipment; in particular, no solutions are currently available for tongue ultrasound recordings. In this study, we investigate the degree of session-dependency of standard feed-forward DNN-based models for ultrasound-based SSI systems. Besides examining the amount of training data required for estimating the speech synthesis parameters, we also show that DNN adaptation can be useful for handling session dependency. Our results indicate that with adaptation, less training data and training time are needed to achieve the same speech quality as training a new DNN from scratch. Our experiments also suggest that the sub-optimal cross-session behavior is caused by the misalignment of the recording equipment, as adapting only the lower, feature-extracting layers of the network proved sufficient to achieve a comparable level of performance.
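The lower-layer adaptation strategy described above can be sketched in code. Below is a minimal PyTorch illustration, assuming a feed-forward network that maps ultrasound feature vectors to vocoder parameters; the layer sizes, the `SSIRegressor` class, the `adapt_lower_layers` helper, and the choice of adapting two layers are all illustrative assumptions, not the paper's exact architecture or training setup.

```python
# Minimal sketch of session adaptation for an ultrasound-to-speech DNN.
# Assumptions (not from the paper): layer sizes, optimizer settings, and
# the number of "lower" layers to adapt are illustrative placeholders.
import torch
import torch.nn as nn

class SSIRegressor(nn.Module):
    """Feed-forward DNN mapping ultrasound features to vocoder parameters."""
    def __init__(self, n_in=8192, n_hidden=1000, n_out=25, n_layers=5):
        super().__init__()
        layers, dim = [], n_in
        for _ in range(n_layers):
            layers += [nn.Linear(dim, n_hidden), nn.ReLU()]
            dim = n_hidden
        layers.append(nn.Linear(dim, n_out))  # linear output for regression
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def adapt_lower_layers(model, n_adapt=2):
    """Freeze everything, then unfreeze only the first n_adapt Linear layers.

    Intuition: probe misalignment between sessions mainly changes the input
    (image) space, so re-learning the lower, feature-extracting layers while
    keeping the upper layers fixed can be enough.
    """
    for p in model.parameters():
        p.requires_grad = False
    linear_seen = 0
    for module in model.net:
        if isinstance(module, nn.Linear):
            if linear_seen < n_adapt:
                for p in module.parameters():
                    p.requires_grad = True
            linear_seen += 1
    # Optimize only the unfrozen (lower-layer) parameters.
    return torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# Usage: load the source-session model, then fine-tune on a small
# amount of target-session data.
model = SSIRegressor()
optimizer = adapt_lower_layers(model, n_adapt=2)
loss_fn = nn.MSELoss()
x = torch.randn(32, 8192)   # stand-in for a batch of ultrasound frames
y = torch.randn(32, 25)     # stand-in for target vocoder parameters
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Because only the lower layers carry gradients, each adaptation step touches far fewer parameters than full retraining, which is consistent with the reported savings in training data and time.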
Original language | English |
---|---|
Pages (from-to) | 109-124 |
Number of pages | 16 |
Journal | Acta Polytechnica Hungarica |
Volume | 17 |
Issue number | 7 |
DOIs | |
Publication status | Published - 1 Jan 2020 |
MoE publication type | A1 Journal article-refereed |
Keywords
- Articulatory-to-acoustic mapping
- Deep Neural Networks
- DNN adaptation
- Session dependency
- Silent speech interfaces