Propagating Large Language Models Programming Feedback

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

1 Citation (Scopus)
14 Downloads (Pure)

Abstract

Large language models (LLMs) such as GPT-4 have emerged as promising tools for providing programming feedback. However, effective deployment of LLMs in massive classes and Massive Open Online Courses (MOOCs) raises financial concerns, calling for methods to minimize the number of calls to the APIs and systems serving such powerful models. In this article, we revisit the problem of 'propagating feedback' within the contemporary landscape of LLMs. Specifically, we explore feedback propagation as a way to reduce the cost of leveraging LLMs for providing programming feedback at scale. Our study investigates the effectiveness of this approach in the context of students requiring next-step hints for Python programming problems, presenting initial results that support the viability of the approach. We discuss our findings' implications and suggest directions for future research in optimizing feedback mechanisms for large-scale educational environments.
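The abstract describes feedback propagation only at a high level. As a loose illustration of the general idea (not the paper's actual method), the sketch below reuses a cached LLM-generated hint for student programs that are sufficiently similar to one already answered, and only calls the LLM API otherwise. The similarity measure, threshold, and function names are hypothetical choices for illustration.

```python
from difflib import SequenceMatcher

# Illustrative sketch of feedback propagation: hints generated by an LLM for
# one student's program are reused for sufficiently similar programs instead
# of issuing a new API call. The threshold and similarity measure below are
# assumptions, not details taken from the paper.

SIMILARITY_THRESHOLD = 0.9          # assumed cutoff for "similar enough"
hint_cache: list[tuple[str, str]] = []  # (program source, cached hint)


def similarity(a: str, b: str) -> float:
    """Character-level similarity between two program sources, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()


def get_hint(program: str, call_llm) -> str:
    """Return a next-step hint, propagating a cached hint when possible."""
    for cached_program, cached_hint in hint_cache:
        if similarity(program, cached_program) >= SIMILARITY_THRESHOLD:
            return cached_hint   # propagate existing feedback, no API call
    hint = call_llm(program)     # fall back to a fresh (paid) LLM request
    hint_cache.append((program, hint))
    return hint
```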

Original language: English
Title of host publication: L@S 2024 - Proceedings of the 11th ACM Conference on Learning @ Scale
Publisher: ACM
Pages: 366-370
Number of pages: 5
ISBN (Electronic): 979-8-4007-0633-2
DOIs
Publication status: Published - 9 Jul 2024
MoE publication type: A4 Conference publication
Event: ACM Conference on Learning @ Scale - Atlanta, United States
Duration: 18 Jul 2024 – 20 Jul 2024
Conference number: 11

Conference

Conference: ACM Conference on Learning @ Scale
Abbreviated title: L@S
Country/Territory: United States
City: Atlanta
Period: 18/07/2024 – 20/07/2024

Keywords

  • computer science education
  • large language models
  • programming feedback
