Benchmarking introductory programming exams: Some preliminary results

Simon, Judy Sheard, Daryl D'Souza, Peter Klemperer, Leo Porter, Juha Sorva, Martijn Stegeman, Daniel Zingaro

Research output: Conference article in proceedings (chapter in book/report/conference proceeding), scientific, peer-reviewed

9 Citations (Scopus)


The programming education literature includes many observations that pass rates are low in introductory programming courses, but few or no comparisons of student performance across courses. This paper addresses that shortcoming. Having included a small set of identical questions in the final examinations of a number of introductory programming courses, we illustrate the use of these questions to examine the relative performance of the students both across multiple institutions and within some institutions. We also use the questions to quantify the size and overall difficulty of each exam. We find substantial differences across the courses, and venture some possible explanations of the differences. We conclude by explaining the potential benefits to instructors of using the same questions in their own exams.

Original language: English
Title of host publication: ICER 2016 - Proceedings of the 2016 ACM Conference on International Computing Education Research
Number of pages: 9
ISBN (Electronic): 9781450344494
Publication status: Published - 25 Aug 2016
MoE publication type: A4 Conference publication
Event: ACM Conference on International Computing Education Research - Melbourne, Australia
Duration: 8 Sept 2016 - 12 Sept 2016
Conference number: 12


Conference: ACM Conference on International Computing Education Research
Abbreviated title: ICER


