Abstract
Large language models (LLMs) have shown great potential for the automatic generation of feedback in a wide range of computing contexts. However, concerns have been voiced around the privacy and ethical implications of sending student work to proprietary models. This has sparked considerable interest in the use of open-source LLMs in education, but the quality of the feedback that such open models can produce remains understudied. This is a concern, as providing flawed or misleading generated feedback could be detrimental to student learning. Inspired by recent work that has utilised very powerful LLMs, such as GPT-4, to evaluate the outputs produced by less powerful models, we conduct an automated analysis of the quality of the feedback produced by several open-source models using a dataset from an introductory programming course. First, we investigate the viability of employing GPT-4 as an automated evaluator by comparing its evaluations with those of a human expert. We observe that GPT-4 demonstrates a bias toward rating feedback positively while exhibiting moderate agreement with human raters, showcasing its potential as a feedback evaluator. Second, we explore the quality of the feedback generated by several leading open-source LLMs, using GPT-4 to evaluate it. We find that some models offer performance competitive with popular proprietary LLMs, such as ChatGPT, indicating opportunities for their responsible use in educational settings.
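The record does not include the paper's implementation, but the GPT-4-as-a-judge setup the abstract describes can be sketched briefly: GPT-4 rates each piece of programming feedback on an ordinal rubric, and its ratings are compared against a human expert's using a chance-corrected agreement statistic. The rubric wording, model string, and sample ratings below are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal, illustrative sketch of a GPT-4-as-a-judge pipeline: GPT-4 rates
# programming feedback on a small ordinal scale, and its ratings are compared
# against a human expert's. Rubric and data are hypothetical.
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical rubric prompt; the paper's actual criteria and wording may differ.
JUDGE_PROMPT = """You are grading feedback given to a student about their buggy program.

Student program:
{program}

Feedback to evaluate:
{feedback}

Does the feedback correctly identify the bug(s)?
Answer with a single integer: 0 = no, 1 = partially, 2 = yes. Reply with the number only."""


def judge_feedback(program: str, feedback: str) -> int:
    """Ask GPT-4 to rate one piece of feedback on a 0-2 ordinal scale."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduce variance in grading
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(program=program, feedback=feedback),
        }],
    )
    return int(response.choices[0].message.content.strip())


# Hypothetical ratings of the same feedback items by GPT-4 and a human expert.
gpt4_ratings = [2, 1, 2, 0, 2, 1]
human_ratings = [2, 1, 1, 0, 2, 2]

# A weighted kappa respects the ordinal scale: disagreeing by two levels
# counts more heavily than disagreeing by one.
kappa = cohen_kappa_score(human_ratings, gpt4_ratings, weights="quadratic")
print(f"GPT-4 vs. human agreement (quadratic-weighted kappa): {kappa:.2f}")
```

By the common Landis and Koch convention, kappa values between roughly 0.41 and 0.60 are read as moderate agreement, which matches how the abstract characterises GPT-4's alignment with the human rater.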
Original language | English |
---|---|
Title of host publication | ITiCSE 2024 - Proceedings of the 2024 Conference on Innovation and Technology in Computer Science Education |
Publisher | ACM |
Pages | 52-58 |
Number of pages | 7 |
ISBN (Electronic) | 979-8-4007-0600-4 |
DOIs | |
Publication status | Published - 3 Jul 2024 |
MoE publication type | A4 Conference publication |
Event | Annual Conference on Innovation & Technology in Computer Science Education, Università degli Studi di Milano, Milan, Italy. Duration: 8 Jul 2024 → 10 Jul 2024. Conference number: 29. https://iticse.acm.org/2024/ |
Conference
Conference | Annual Conference on Innovation & Technology in Computer Science Education |
---|---|
Abbreviated title | ITiCSE |
Country/Territory | Italy |
City | Milan |
Period | 08/07/2024 → 10/07/2024 |
Internet address | https://iticse.acm.org/2024/ |
Keywords
- automatic evaluation
- automatic feedback
- code llama
- generative ai
- gpt-4
- large language models
- llm-as-a-judge
- llms
- open source
- programming feedback
- zephyr
Projects
- Leinonen Juho /AT tot.: Large Language Models for Computing Education (Active)
  Leinonen, J. (Principal investigator)
  01/09/2023 → 31/08/2027
  Project: RCF Academy Research Fellow (new)