Benchmarking introductory programming exams: How and why

Simon, Judy Sheard, Daryl D'Souza, Peter Klemperer, Leo Porter, Juha Sorva, Martijn Stegeman, Daniel Zingaro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Abstract

Ten selected questions have been included in 13 introductory programming exams at seven institutions in five countries. The students' results on these questions, and on the exams as a whole, lead to the development of a benchmark against which the exams in other introductory programming courses can be assessed. We illustrate some potential benefits of comparing exam performance against this benchmark, and show other uses to which it can be put, for example to assess the size and the overall difficulty of an exam. We invite others to apply the benchmark to their own courses and to share the results with us.

Original language: English
Title of host publication: ITiCSE 2016 - Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education
Publisher: ACM
Pages: 154-159
Number of pages: 6
Volume: 11-13-July-2016
ISBN (Electronic): 9781450342315
DOIs
Publication status: Published - 11 Jul 2016
MoE publication type: A4 Article in a conference publication
Event: Annual Conference on Innovation and Technology in Computer Science Education - Arequipa, Peru
Duration: 11 Jul 2016 - 13 Jul 2016
Conference number: 21

Conference

Conference: Annual Conference on Innovation and Technology in Computer Science Education
Abbreviated title: ITiCSE
Country/Territory: Peru
City: Arequipa
Period: 11/07/2016 - 13/07/2016

Keywords

  • Benchmarking
  • Examination
  • Introductory programming
