Open Research Newcastle
Benchmarking introductory programming exams: how and why

conference contribution
posted on 2025-05-09, 12:18 authored by Simon, Judy Sheard, Daryl D’Souza, Peter Klemperer, Leo Porter, Juha Sorva, Martijn Stegeman, Daniel Zingaro
Ten selected questions have been included in 13 introductory programming exams at seven institutions in five countries. The students’ results on these questions, and on the exams as a whole, led to the development of a benchmark against which the exams in other introductory programming courses can be assessed. We illustrate some potential benefits of comparing exam performance against this benchmark, and show other uses to which it can be put, for example assessing the size and overall difficulty of an exam. We invite others to apply the benchmark to their own courses and to share the results with us.

History

Source title

ITiCSE '16 Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education

Name of conference

2016 ACM Conference on Innovation and Technology in Computer Science Education

Location

Arequipa, Peru

Start date

2016-07-11

End date

2016-07-13

Pagination

154-159

Publisher

Association for Computing Machinery (ACM)

Place published

New York, NY

Language

English (en)

College/Research Centre

Faculty of Engineering and Built Environment

School

School of Electrical Engineering and Computer Science

Rights statement

© ACM, 2016. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ITiCSE '16 Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education http://doi.acm.org/10.1145/2899415.2899473