Evaluating the Performance of Code Generation Models for Solving Parsons Problems with Small Prompt Variations

Brent Reeves, Sami Sarsa, James Prather, Paul Denny, Brett A. Becker, Arto Hellas, Bailey Kimmel, Garrett Powell, Juho Leinonen

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

21 Citations (Scopus)
44 Downloads (Pure)

Abstract

The recent emergence of code generation tools powered by large language models has attracted wide attention. Models such as OpenAI Codex can take natural language problem descriptions as input and generate highly accurate source code solutions, with potentially significant implications for computing education. Given the many complexities that students face when learning to write code, they may quickly become reliant on such tools without properly understanding the underlying concepts. One popular approach for scaffolding the code writing process is to use Parsons problems, which present solution lines of code in a scrambled order. These remove the complexities of low-level syntax, and allow students to focus on algorithmic and design-level problem solving. It is unclear how well code generation models can be applied to solve Parsons problems, given the mechanics of these models and prior evidence that they underperform when problems include specific restrictions. In this paper, we explore the performance of the Codex model for solving Parsons problems over various prompt variations. Using a corpus of Parsons problems we sourced from the computing education literature, we find that Codex successfully reorders the problem blocks about half of the time, a much lower rate of success when compared to prior work on more free-form programming tasks. Regarding prompts, we find that small variations in prompting have a noticeable effect on model performance, although the effect is not as pronounced as between different problems.
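For illustration only (this example is hypothetical, not drawn from the paper's corpus): a Parsons problem presents the lines of a correct solution in scrambled order, and the solver must arrange them, which is the task format the paper feeds to Codex. A minimal sketch in Python:

```python
# Illustrative Parsons problem (hypothetical example, not from the paper).
# The solution lines of a simple function are given in scrambled order;
# the solver must reorder them into a working program.

scrambled = [
    "        total += n",
    "def sum_list(numbers):",
    "    return total",
    "    for n in numbers:",
    "    total = 0",
]

# One correct ordering of the scrambled lines (indices into `scrambled`):
solution_order = [1, 4, 3, 0, 2]
program = "\n".join(scrambled[i] for i in solution_order)

# Execute the reassembled code and check that it behaves correctly.
namespace = {}
exec(program, namespace)
assert namespace["sum_list"]([1, 2, 3]) == 6
print("reordered solution works")
```

Note that the correct answer is fully determined by the given blocks: unlike free-form code writing, the model may not rephrase or add lines, which is the kind of restriction the paper identifies as difficult for code generation models.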

Original language: English
Title of host publication: ITiCSE 2023 - Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education
Publisher: ACM
Pages: 299-305
Number of pages: 7
ISBN (Electronic): 979-8-4007-0138-2
DOIs
Publication status: Published - 29 Jun 2023
MoE publication type: A4 Conference publication
Event: Annual Conference on Innovation and Technology in Computer Science Education - Turku, Finland
Duration: 8 Jul 2023 - 12 Jul 2023
Conference number: 28

Conference

Conference: Annual Conference on Innovation and Technology in Computer Science Education
Abbreviated title: ITiCSE
Country/Territory: Finland
City: Turku
Period: 08/07/2023 - 12/07/2023

Keywords

  • academic integrity
  • AI
  • artificial intelligence
  • ChatGPT
  • code generation
  • code writing
  • Codex
  • computer programming
  • Copilot
  • CS1
  • deep learning
  • generative AI
  • GitHub
  • GPT-3
  • introductory programming
  • large language models
  • machine learning
  • ML
  • natural language processing
  • neural networks
  • novice programming
  • OpenAI
