Abstract
Large language models (LLMs) such as GPT-4 have emerged as promising tools for providing programming feedback. However, effective deployment of LLMs in massive classes and Massive Open Online Courses (MOOCs) raises financial concerns, calling for methods to minimize the number of calls to the APIs and systems serving such powerful models. In this article, we revisit the problem of 'propagating feedback' within the contemporary landscape of LLMs. Specifically, we explore feedback propagation as a way to reduce the cost of leveraging LLMs for providing programming feedback at scale. Our study investigates the effectiveness of this approach in the context of students requiring next-step hints for Python programming problems, presenting initial results that support the viability of the approach. We discuss our findings' implications and suggest directions for future research in optimizing feedback mechanisms for large-scale educational environments.
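The core idea of feedback propagation can be illustrated with a minimal sketch (all names below, such as `request_llm_hint` and the AST-based normalization, are hypothetical illustrations, not the authors' implementation): hints generated for earlier submissions are cached, and when a new submission normalizes to an already-seen program, the stored hint is propagated instead of issuing a new API call.

```python
import ast
import hashlib

# Hypothetical stand-in for a real LLM API call (e.g., to GPT-4);
# not part of the paper's artifact.
def request_llm_hint(problem_id: str, student_code: str) -> str:
    raise NotImplementedError("call your LLM provider here")

def normalize(code: str) -> str:
    """Canonicalize a submission so syntactically equivalent programs
    collide: parse to an AST and dump it, discarding comments and
    whitespace. (One of many possible similarity keys; an assumption
    of this sketch.)"""
    try:
        return ast.dump(ast.parse(code))
    except SyntaxError:
        return code  # unparseable code falls back to the raw text

_hint_cache: dict[str, str] = {}

def next_step_hint(problem_id: str, student_code: str) -> str:
    """Return a next-step hint, reusing (propagating) a previously
    generated hint when an equivalent submission was seen before."""
    key = hashlib.sha256(
        (problem_id + "\0" + normalize(student_code)).encode()
    ).hexdigest()
    if key not in _hint_cache:   # cache miss: pay for one API call
        _hint_cache[key] = request_llm_hint(problem_id, student_code)
    return _hint_cache[key]      # cache hit: zero-cost propagation
```

The rationale is that in large classes many students reach identical or near-identical intermediate programs, so even an exact-match key like the one above can avoid a meaningful fraction of API calls; fuzzier similarity measures would extend the reach of propagation further.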
Original language | English |
---|---|
Title of host publication | L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale |
Publisher | ACM |
Pages | 366-370 |
Number of pages | 5 |
ISBN (electronic) | 979-8-4007-0633-2 |
DOI - permanent links | |
Status | Published - 9 Jul 2024 |
OKM publication type | A4 Article in conference proceedings |
Event | ACM Conference on Learning @ Scale - Atlanta, United States. Duration: 18 Jul 2024 → 20 Jul 2024. Conference number: 11 |
Conference
Conference | ACM Conference on Learning @ Scale |
---|---|
Abbreviated title | L@S |
Country/Territory | United States |
City | Atlanta |
Period | 18/07/2024 → 20/07/2024 |