Comparison and Analysis of New Curriculum Criteria for End-to-End ASR
Abstract
It is common knowledge that the quantity and quality of the training data play a significant role in the creation of a good machine learning model. In this paper, we take it one step further and demonstrate that the way the training examples are arranged is also of crucial importance. Curriculum Learning is built on the observation that organized and structured assimilation of knowledge enables faster training and better comprehension. When humans learn to speak, they first try to utter basic phones and then gradually move towards more complex structures such as words and sentences. This methodology is known as Curriculum Learning, and we employ it in the context of Automatic Speech Recognition. We hypothesize that end-to-end models can achieve better performance when provided with an organized training set consisting of examples that exhibit an increasing level of difficulty (i.e., a curriculum). To impose structure on the training set and to define the notion of an easy example, we explored multiple scoring functions that either use feedback from an external neural network or incorporate feedback from the model itself. Empirical results show that different curricula allow us to balance training time against the network’s performance.
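The core idea described above — ordering the training set from easy to hard according to a scoring function — can be sketched as follows. Note that the scoring function used here (utterance duration as a difficulty proxy) is an illustrative assumption, not one of the paper's actual criteria; the paper explores scores derived from an external network's feedback or from the model's own feedback.

```python
# Minimal sketch of building a curriculum: sort training examples
# from easiest to hardest according to a difficulty score.

def build_curriculum(examples, score):
    """Return training examples sorted from easiest to hardest."""
    return sorted(examples, key=score)

# Hypothetical toy data: (utterance_id, duration_in_seconds).
examples = [("utt3", 7.2), ("utt1", 1.5), ("utt2", 3.8)]

# Assumption for illustration: shorter utterances are easier.
curriculum = build_curriculum(examples, score=lambda ex: ex[1])
print([utt for utt, _ in curriculum])  # → ['utt1', 'utt2', 'utt3']
```

In practice the score would come from a trained model (e.g., the per-example loss of an external network, or the training model's own loss), and the sorted set would then be fed to the ASR model in curriculum order.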
Original language | English |
---|---|
Title of host publication | Proceedings of Interspeech'22 |
Publisher | International Speech Communication Association (ISCA) |
Pages | 66-70 |
Number of pages | 5 |
Volume | 2022-September |
DOIs | |
Publication status | Published - 2022 |
MoE publication type | A4 Article in a conference publication |
Event | Interspeech, Incheon, Korea, Republic of. Duration: 18 Sept 2022 → 22 Sept 2022 |
Publication series
Name | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
---|---|
Publisher | International Speech Communication Association |
ISSN (Print) | 2308-457X |
ISSN (Electronic) | 1990-9772 |
Conference
Conference | Interspeech |
---|---|
Country/Territory | Korea, Republic of |
City | Incheon |
Period | 18/09/2022 → 22/09/2022 |
Keywords
- Curriculum Learning
- Automatic Speech Recognition
- End-to-End
Fingerprint
Research topics of 'Comparison and Analysis of New Curriculum Criteria for End-to-End ASR'.
Projects
- USSEE: Understanding Speech and Scene with Ears and Eyes
Kurimo, M., Virkkunen, A. & Grósz, T.
01/01/2022 → 31/12/2024
Project: Academy of Finland: Other research funding