Evaluating Language Models for Generating and Judging Programming Feedback

Charles Koutcheme, Nicola Dainese, Sami Sarsa, Arto Hellas, Juho Leinonen, Syed Ashraf, Paul Denny

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Abstract

The emergence of large language models (LLMs) has transformed research and practice across a wide range of domains. Within the computing education research (CER) domain, LLMs have garnered significant attention, particularly in the context of learning programming. Much of the work on LLMs in CER, however, has focused on applying and evaluating proprietary models. In this article, we evaluate the efficiency of open-source LLMs in generating high-quality feedback for programming assignments and judging the quality of programming feedback, contrasting the results with proprietary models. Our evaluations on a dataset of students’ submissions to introductory Python programming exercises suggest that state-of-the-art open-source LLMs are nearly on par with proprietary models in both generating and assessing programming feedback. Additionally, we demonstrate the efficiency of smaller LLMs in these tasks and highlight the wide range of LLMs accessible, even for free, to educators and practitioners.
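To make the two tasks studied in the paper concrete, the sketch below shows how an open-source chat model could (1) generate feedback on a student's Python submission and (2) act as an LLM-as-a-judge that rates that feedback. This is an illustrative sketch only, not the authors' evaluation pipeline; it assumes a recent version of the Hugging Face transformers library, and the model name, prompts, and rating scale are placeholder assumptions.

```python
# Illustrative sketch, not the paper's pipeline. Assumes a recent `transformers`
# release with chat-style text-generation pipelines; the model name is a
# placeholder assumption.
from transformers import pipeline

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed open-source chat model

student_code = '''
def average(numbers):
    total = 0
    for n in numbers:
        total = total + n
    return total / len(numbers)   # crashes on an empty list
'''

llm = pipeline("text-generation", model=MODEL)

# 1) Generate feedback on the student's submission.
feedback_messages = [
    {"role": "system",
     "content": "You are a programming tutor. Give concise, constructive "
                "feedback without revealing a complete solution."},
    {"role": "user",
     "content": "Exercise: return the average of a list of numbers.\n"
                f"Student submission:\n{student_code}"},
]
feedback = llm(feedback_messages, max_new_tokens=256)[0]["generated_text"][-1]["content"]

# 2) Judge the generated feedback (LLM-as-a-judge), here with the same model.
judge_messages = [
    {"role": "system",
     "content": "You assess programming feedback. Rate it from 1 (unhelpful or "
                "misleading) to 5 (accurate and actionable), then justify briefly."},
    {"role": "user",
     "content": f"Submission:\n{student_code}\n\nFeedback to assess:\n{feedback}"},
]
judgement = llm(judge_messages, max_new_tokens=128)[0]["generated_text"][-1]["content"]

print(feedback)
print(judgement)
```

In this kind of setup, swapping MODEL for a smaller open model or a proprietary API is the main experimental variable, which is the comparison the abstract describes.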

Original language: English
Title of host publication: SIGCSE TS 2025 - Proceedings of the 56th ACM Technical Symposium on Computer Science Education
Publisher: ACM
Pages: 624-630
Number of pages: 7
Volume: 1
ISBN (Electronic): 979-8-4007-0531-1
Publication status: Published - 18 Feb 2025
MoE publication type: A4 Conference publication
Event: ACM Technical Symposium on Computer Science Education - Pittsburgh, United States
Duration: 26 Feb 2025 - 1 Mar 2025
Conference number: 56

Conference

Conference: ACM Technical Symposium on Computer Science Education
Abbreviated title: SIGCSE
Country/Territory: United States
City: Pittsburgh
Period: 26/02/2025 - 01/03/2025

Keywords

  • automatic evaluation
  • automatic feedback
  • generative AI
  • large language models
  • LLM-as-a-judge
  • open source
  • programming feedback
