Abstract
Code reading and comprehension skills are essential for novices learning programming, and explain-in-plain-English (EiPE) tasks are a well-established approach for assessing these skills. However, manual grading of EiPE tasks is time-consuming, which has limited their use in practice. To address this, we explore an approach in which students explain code samples to a large language model (LLM), which generates code based on their explanations. The generated code is then evaluated against test suites and shown to students along with the test results. We are interested in understanding how automated formative feedback from an LLM guides students’ subsequent prompts towards solving EiPE tasks. We analyzed 177 unique attempts on four EiPE exercises from 21 students, examining what kinds of mistakes they made and how they fixed them. We found that when students made mistakes, they either identified and corrected them using a combination of the LLM-generated code and test case results, or they switched from describing the purpose of the code to describing the sample code line-by-line until the LLM-generated code exactly matched the obfuscated sample code. Our findings suggest both optimism and caution regarding the use of LLMs for unmonitored formative feedback. We identified false positive and false negative cases, helpful variable naming, and clues of direct code recitation by students. For most students, this approach represents an efficient way to demonstrate and assess their code comprehension skills. However, we also found evidence of misconceptions being reinforced, suggesting the need for further work to identify and guide students more effectively.
Original language | English |
---|---|
Title of host publication | SIGCSE TS 2025 - Proceedings of the 56th ACM Technical Symposium on Computer Science Education |
Publisher | ACM |
Pages | 575-581 |
Number of pages | 7 |
Volume | 1 |
ISBN (Electronic) | 979-8-4007-0531-1 |
DOIs | |
Publication status | Published - 18 Feb 2025 |
MoE publication type | A4 Conference publication |
Event | ACM Technical Symposium on Computer Science Education, Pittsburgh, United States, 26 Feb 2025 → 1 Mar 2025 (Conference number: 56) |
Conference
Conference | ACM Technical Symposium on Computer Science Education |
---|---|
Abbreviated title | SIGCSE |
Country/Territory | United States |
City | Pittsburgh |
Period | 26/02/2025 → 01/03/2025 |
Keywords
- EiPE
- explain in plain English
- formative feedback
- large language models
- LLM
- misconceptions
- qualitative analysis
Fingerprint
Dive into the research topics of 'Exploring Student Reactions to LLM-Generated Feedback on Explain in Plain English Problems'.
Projects
Leinonen Juho /AT tot.: Large Language Models for Computing Education
Leinonen, J. (Principal investigator)
01/09/2023 → 31/08/2027
Project: RCF Academy Research Fellow (new)