Decoding Logic Errors: A Comparative Study on Bug Detection by Students and Large Language Models

Stephen MacNeil, Paul Denny, Andrew Tran, Juho Leinonen, Seth Bernstein, Arto Hellas, Sami Sarsa, Joanne Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Abstract

Identifying and resolving logic errors can be one of the most frustrating challenges for novice programmers. Unlike syntax errors, for which a compiler or interpreter can issue a message, logic errors can be subtle. In certain conditions, buggy code may even exhibit correct behavior; in other cases, the issue might be about how a problem statement has been interpreted. Such errors can be hard to spot when reading the code, and they can also at times be missed by automated tests. There is great educational potential in automatically detecting logic errors, especially when paired with suitable feedback for novices. Large language models (LLMs) have recently demonstrated surprising performance on a range of computing tasks, including generating and explaining code. These capabilities are closely linked to code syntax, which aligns with the next-token prediction behavior of LLMs. Logic errors, on the other hand, relate to the runtime performance of code and thus may not be as well suited to analysis by LLMs. To explore this, we investigate the performance of two popular LLMs, GPT-3 and GPT-4, at detecting and providing novice-friendly explanations of logic errors. We compare LLM performance with that of a large cohort of introductory computing students (n = 964) solving the same error detection task. Through a mixed-methods analysis of student and model responses, we observe significant improvement in logic error identification between the previous and current generation of LLMs, and find that both LLM generations significantly outperform students. We outline how such models could be integrated into computing education tools, and discuss their potential for supporting students when learning programming.
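
The paper itself does not reproduce its prompts or tooling in this record, so the snippet below is only an illustrative sketch of how a logic-error detection query of the kind described might be posed to GPT-4 via the OpenAI Python client. The buggy average function, the prompt wording, and the model identifier are assumptions for illustration, not material from the study.

```python
# Illustrative sketch only: the study does not disclose its exact prompts or code here.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

# Hypothetical buggy student submission: the loop bound skips the last element,
# so the function returns the wrong average for any non-empty list.
buggy_code = '''
def average(numbers):
    total = 0
    for i in range(len(numbers) - 1):
        total += numbers[i]
    return total / len(numbers)
'''

prompt = (
    "The following function is intended to return the average of a list of numbers, "
    "but it contains a logic error. Identify the error and explain it in terms a "
    "novice programmer would understand:\n" + buggy_code
)

response = client.chat.completions.create(
    model="gpt-4",  # the newer of the two models compared in the paper
    messages=[{"role": "user", "content": prompt}],
)

# Print the model's identification and novice-friendly explanation of the bug.
print(response.choices[0].message.content)
```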

Original language: English
Title of host publication: ACE 2024 - Proceedings of the 26th Australasian Computing Education Conference, Held in conjunction with
Subtitle of host publication: Australasian Computer Science Week
Publisher: ACM
Pages: 11-18
Number of pages: 8
ISBN (Electronic): 9798400716195
DOIs
Publication status: Published - 29 Jan 2024
MoE publication type: A4 Conference publication
Event: Australasian Computing Education Conference - University of New South Wales, Sydney, Australia
Duration: 29 Jan 2024 – 2 Feb 2024
https://aceconference2024.github.io/aceconference2024/

Conference

Conference: Australasian Computing Education Conference
Abbreviated title: ACE
Country/Territory: Australia
City: Sydney
Period: 29/01/2024 – 02/02/2024
Internet address: https://aceconference2024.github.io/aceconference2024/

Keywords

  • bug detection
  • computing education
  • generative AI
  • large language models
  • programming errors

