Evaluating CEFR rater performance through the analysis of spoken learner corpora

Lan-Fen Huang*, Simon Kubelec, Nicole Keng, Lung-hsun Hsu

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Background
Although teachers of English are required to assess students’ speaking proficiency against the Common European Framework of Reference for Languages (CEFR), their ability to rate is seldom evaluated. Nor has the application of CEFR descriptors to the assessment of English speaking in English-as-a-foreign-language contexts often been investigated.

Methods
The present study first introduced a form of rater standardization training. Two trained raters then assessed the speaking proficiency of 100 learners using actual corpus data, and the study compared their rating results to evaluate inter-rater reliability. Next, ten samples on which Raters 1 and 2 showed exact or adjacent agreement were rated by six teachers of English in tertiary education: two of them had attended rater standardization training with Raters 1 and 2, while the other four had received no relevant training.

Results
The two raters agreed exactly in 44% of cases, and their rating results were closely correlated (ρ = .893). Cross-tabulation showed that in one-third of the samples Rater 2 scored higher than Rater 1, and that the two raters agreed more often at the higher levels. The better rating performance of Teachers 1 and 2 suggests that rater standardization training may have enhanced their performance. The unsatisfactorily low proportion of correctly assigned levels across the teachers’ ratings overall was probably due to the heavy reliance on subjective judgment invited by vague CEFR descriptors.

Conclusions
Regarding assessment, the study shows that attending rater standardization training helps in assessing learners’ speaking proficiency against the CEFR. The study also provides a model for assessing data from spoken learner corpora, adding an important attribute to future studies of learner corpora. At the same time, the paper raises doubts about teachers’ ability to evaluate students’ speaking proficiency against the CEFR. As the CEFR has been widely adopted in English language teaching and assessment, it is suggested that the rater training framework established in this study, which uses learner corpus data, be offered to (prospective) teachers of English in tertiary education.
Original language: English
Article number: 14
Pages (from-to): 1-17
Journal: Language Testing in Asia
Volume: 8
DOIs
Publication status: Published - 2018
MoE publication type: A1 Journal article-refereed

Keywords

  • CEFR
  • Learner Corpora
