Propagating Large Language Models Programming Feedback

Research output: Conference article in proceedings › Scientific › peer-reviewed

11 Downloads (Pure)

Abstract

Large language models (LLMs) such as GPT-4 have emerged as promising tools for providing programming feedback. However, effective deployment of LLMs in massive classes and Massive Open Online Courses (MOOCs) raises financial concerns, calling for methods to minimize the number of calls to the APIs and systems serving such powerful models. In this article, we revisit the problem of 'propagating feedback' within the contemporary landscape of LLMs. Specifically, we explore feedback propagation as a way to reduce the cost of leveraging LLMs for providing programming feedback at scale. Our study investigates the effectiveness of this approach in the context of students requiring next-step hints for Python programming problems, presenting initial results that support the viability of the approach. We discuss our findings' implications and suggest directions for future research in optimizing feedback mechanisms for large-scale educational environments.
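The core idea of propagating feedback (reusing a hint generated for one student's program on later, equivalent submissions so that fewer LLM API calls are made) can be sketched as below. The AST-based normalization and the `HintCache` design are illustrative assumptions for this sketch, not the method described in the paper.

```python
import ast
import hashlib


def normalize(source: str) -> str:
    """Canonicalize a Python submission so near-identical programs
    (differing only in whitespace or comments) map to the same key.
    A real system might normalize more aggressively, e.g. by renaming
    variables or comparing ASTs approximately."""
    tree = ast.parse(source)
    return ast.dump(tree)  # formatting and comments are discarded


class HintCache:
    """Propagate an LLM-generated next-step hint to later students
    whose submissions normalize to the same key, saving API calls."""

    def __init__(self, llm_hint_fn):
        self._llm_hint_fn = llm_hint_fn  # invoked only on cache misses
        self._cache = {}
        self.api_calls = 0

    def hint(self, source: str) -> str:
        key = hashlib.sha256(normalize(source).encode()).hexdigest()
        if key not in self._cache:
            self.api_calls += 1  # one real LLM call per distinct program
            self._cache[key] = self._llm_hint_fn(source)
        return self._cache[key]


# Usage: two formatting variants of the same program trigger one call.
cache = HintCache(lambda src: "Consider iterating over the list.")
hint_a = cache.hint("total = 0")
hint_b = cache.hint("total=0  # init accumulator")
```

Here `hint_a` and `hint_b` are identical while `cache.api_calls` stays at 1, illustrating the cost saving the abstract targets at MOOC scale.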

Original language: English
Title: L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale
Publisher: ACM
Pages: 366-370
Number of pages: 5
ISBN (electronic): 979-8-4007-0633-2
DOI - permanent links
Status: Published - 9 Jul 2024
Ministry of Education (OKM) publication type: A4 Article in conference proceedings
Event: ACM Conference on Learning @ Scale - Atlanta, United States
Duration: 18 Jul 2024 - 20 Jul 2024
Conference number: 11

Conference

Conference: ACM Conference on Learning @ Scale
Abbreviation: L@S
Country/Territory: United States
City: Atlanta
Period: 18/07/2024 - 20/07/2024
