Abstract
Large language models (LLMs) such as GPT-4 have emerged as promising tools for providing programming feedback. However, effective deployment of LLMs in massive classes and Massive Open Online Courses (MOOCs) raises financial concerns, calling for methods to minimize the number of calls to the APIs and systems serving such powerful models. In this article, we revisit the problem of 'propagating feedback' within the contemporary landscape of LLMs. Specifically, we explore feedback propagation as a way to reduce the cost of leveraging LLMs for providing programming feedback at scale. Our study investigates the effectiveness of this approach in the context of students requiring next-step hints for Python programming problems, presenting initial results that support the viability of the approach. We discuss the implications of our findings and suggest directions for future research on optimizing feedback mechanisms for large-scale educational environments.
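The abstract leaves the propagation mechanism at a high level. As a rough illustration of the idea, the sketch below caches LLM-generated next-step hints keyed on a normalized form of the student's code and reuses them for equivalent program states, so each distinct state costs at most one API call. The normalization via `ast.dump`, the `PropagatingHintProvider` class, and the `request_llm_hint` callback are illustrative assumptions, not details taken from the paper.

```python
import ast
import hashlib
from typing import Callable, Dict, Tuple


def canonical_key(source: str) -> str:
    """Reduce a student's Python program to a canonical key.

    The normalization is a guess at what "equivalent program states" could
    mean: parse to an AST and dump it, so whitespace and comment differences
    do not create new keys. Unparseable code falls back to its raw text.
    """
    try:
        normalized = ast.dump(ast.parse(source))
    except SyntaxError:
        normalized = source
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


class PropagatingHintProvider:
    """Serve next-step hints, calling the LLM only for unseen program states."""

    def __init__(self, request_llm_hint: Callable[[str, str], str]):
        # request_llm_hint(problem_id, source) stands in for a call to GPT-4
        # or a similar model API; the actual prompt and API are not specified
        # in the abstract.
        self._request_llm_hint = request_llm_hint
        self._cache: Dict[Tuple[str, str], str] = {}

    def get_hint(self, problem_id: str, source: str) -> str:
        key = (problem_id, canonical_key(source))
        if key not in self._cache:
            # First request for this program state: pay for one LLM call and
            # propagate (reuse) the resulting hint for later equivalent requests.
            self._cache[key] = self._request_llm_hint(problem_id, source)
        return self._cache[key]
```

In a MOOC-scale deployment, the fraction of hint requests that hit the cache on common stuck states would determine how much of the API cost a scheme like this could actually avoid.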
Original language | English |
---|---|
Title of host publication | L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale |
Publisher | ACM |
Pages | 366-370 |
Number of pages | 5 |
ISBN (Electronic) | 979-8-4007-0633-2 |
DOIs | |
Publication status | Published - 9 Jul 2024 |
MoE publication type | A4 Conference publication |
Event | ACM Conference on Learning @ Scale - Atlanta, United States. Duration: 18 Jul 2024 → 20 Jul 2024. Conference number: 11 |
Conference
Conference | ACM Conference on Learning @ Scale |
---|---|
Abbreviated title | L@S |
Country/Territory | United States |
City | Atlanta |
Period | 18/07/2024 → 20/07/2024 |
Keywords
- computer science education
- large language models
- programming feedback