Benchmarking introductory programming exams: Some preliminary results

Simon, Judy Sheard, Daryl D'Souza, Peter Klemperer, Leo Porter, Juha Sorva, Martijn Stegeman, Daniel Zingaro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

9 Citations (Scopus)

Abstract

The programming education literature includes many observations that pass rates are low in introductory programming courses, but few or no comparisons of student performance across courses. This paper addresses that shortcoming. Having included a small set of identical questions in the final examinations of a number of introductory programming courses, we illustrate the use of these questions to examine the relative performance of the students both across multiple institutions and within some institutions. We also use the questions to quantify the size and overall difficulty of each exam. We find substantial differences across the courses, and venture some possible explanations of the differences. We conclude by explaining the potential benefits to instructors of using the same questions in their own exams.

Original language: English
Title of host publication: ICER 2016 - Proceedings of the 2016 ACM Conference on International Computing Education Research
Publisher: ACM
Pages: 103-111
Number of pages: 9
ISBN (Electronic): 9781450344494
DOIs
Publication status: Published - 25 Aug 2016
MoE publication type: A4 Article in a conference publication
Event: ACM Conference on International Computing Education Research - Melbourne, Australia
Duration: 8 Sep 2016 – 12 Sep 2016
Conference number: 12

Conference

Conference: ACM Conference on International Computing Education Research
Abbreviated title: ICER
Country: Australia
City: Melbourne
Period: 08/09/2016 – 12/09/2016
