LLM-itation is the Sincerest Form of Data: Generating Synthetic Buggy Code Submissions for Computing Education

Juho Leinonen*, Paul Denny, Olli Kiljunen, Stephen MacNeil*, Sami Sarsa, Arto Hellas

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Abstract

There is a great need for data in computing education research. Data is needed to understand how students behave, to train models of student behavior to optimally support students, and to develop and validate new assessment tools and learning analytics techniques. However, relatively few computing education datasets are shared openly, often due to privacy regulations and issues in making sure the data is anonymous. Large language models (LLMs) offer a promising approach to create large-scale, privacy-preserving synthetic data, which can be used to explore various aspects of student learning, develop and test educational technologies, and support research in areas where collecting real student data may be challenging or impractical. This work explores generating synthetic buggy code submissions for introductory programming exercises using GPT-4o. We compare the distribution of test case failures between synthetic and real student data from two courses to analyze the accuracy of the synthetic data in mimicking real student data. Our findings suggest that LLMs can be used to generate synthetic incorrect submissions that are not significantly different from real student data with regard to test case failure distributions. Our research contributes to the development of reliable synthetic datasets for computing education research and teaching, potentially accelerating progress in the field while preserving student privacy.
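To make the approach described in the abstract concrete, the sketch below illustrates the two steps it mentions: prompting GPT-4o for an incorrect submission to an introductory exercise, and comparing test case failure counts between real and synthetic submissions. This is a minimal sketch, not the authors' pipeline; the prompt wording, the example exercise, the failure counts, and the use of a chi-squared goodness-of-fit comparison are illustrative assumptions only.

    # Minimal sketch (assumptions, not the paper's actual method):
    # 1) ask GPT-4o for an incorrect novice-style submission,
    # 2) compare per-test failure counts of synthetic vs. real submissions.
    from openai import OpenAI
    from scipy.stats import chisquare

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = (
        "You are simulating a novice programming student. Write an *incorrect* "
        "Python solution to this exercise, containing a plausible beginner bug.\n"
        "Exercise: write a function is_even(n) that returns True if n is even."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT}],
    )
    synthetic_submission = response.choices[0].message.content

    # Hypothetical failure counts per test case (test_1 .. test_4), aggregated
    # over many submissions; real numbers would come from running each
    # submission against the exercise's test suite.
    real_failures = [120, 45, 30, 5]        # real student submissions
    synthetic_failures = [110, 50, 28, 12]  # GPT-4o generated submissions

    # Scale the real distribution to the synthetic total, then test whether
    # the synthetic failure distribution differs significantly from it.
    total_synth = sum(synthetic_failures)
    expected = [c / sum(real_failures) * total_synth for c in real_failures]
    stat, p_value = chisquare(f_obs=synthetic_failures, f_exp=expected)
    print(f"chi-squared = {stat:.2f}, p = {p_value:.3f}")

A high p-value in such a comparison would indicate, as the paper reports for its data, no significant difference between the synthetic and real failure distributions.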

Original language: English
Title of host publication: ACE 2025 - Proceedings of the 27th Australasian Computing Education Conference, Held in conjunction with
Editors: Carolyn Seton, Simon
Publisher: ACM
Pages: 56-63
Number of pages: 8
ISBN (Electronic): 979-8-4007-1425-2
DOIs
Publication status: Published - 7 Apr 2025
MoE publication type: A4 Conference publication
Event: Australasian Computing Education Conference - Brisbane, Australia
Duration: 12 Feb 2025 - 13 Feb 2025
Conference number: 27

Conference

Conference: Australasian Computing Education Conference
Abbreviated title: ACE
Country/Territory: Australia
City: Brisbane
Period: 12/02/2025 - 13/02/2025

Keywords

  • bugs
  • data generation
  • genAI
  • generative AI
  • GPT-4o
  • large language models
  • LLMs
  • prompt engineering
  • submissions
  • synthetic data
