Gender parity in peer assessment of team software development projects

Tom Crick, Tom Prickett, Jill Bradnum, Alan Godfrey

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



Development projects in which small teams of learners develop software/digital artefacts are common features of computing-related degree programmes. Within these team projects, it can be difficult to ensure that students are fairly recognised and rewarded for the contribution they make to the collective team effort and outputs. Peer assessment is a commonly used approach to promote fairness and due recognition. Maintaining parity within assessment processes is also a critical aspect of fairness. This paper presents the processes employed for the operation of one such team project at a UK higher education institution, using the Team-Q rubric and analysing the impact of the (self-identified) gender of both the learner marking and the learner being marked on the scores obtained. The results from this institutional sample (N=121) using the Team-Q metric offer evidence of gender parity in this context. This study also makes the case for continued vigilance to ensure Team-Q and other rubrics are used in a manner that supports gender parity in computing.
Original language: English
Title of host publication: CEP 2022
Subtitle of host publication: Computing Education Practice Conference
Editors: Rosanne English, Craig Stewart
Place of Publication: New York
Number of pages: 4
ISBN (Electronic): 9781450395618
Publication status: Published - 6 Jan 2022
Event: ACM Computing Education Practice Conference 2022 - Durham University, Durham, United Kingdom
Duration: 6 Jan 2022 – 6 Jan 2022
Conference number: 6


Conference: ACM Computing Education Practice Conference 2022
Abbreviated title: CEP 2022
Country/Territory: United Kingdom


