This article reports findings on the reliability of peer and teacher summative assessments of engineering students’ oral presentation skills in a fourth-year communications subject. The context of the study is unusual in that each oral presentation was rated multiple times, by teams of students and by teams of academic staff. Analysis of variance procedures were used to obtain separate estimates of the inter-rater reliability of peer and teacher assessments for classes in four successive years. Teacher ratings were found to show substantially higher inter-rater agreement than peer ratings. Generalising over the four years, averaging between two and four peer ratings would be required to match the reliability of a single teacher assessment. However, the estimates of individual rater reliability for teachers, based on the intra-class correlation coefficient, were moderately low (0.40 to 0.53). It is concluded that the reliability of summative assessments of oral presentations can be improved by combining teacher marks with the averaged marks obtained from multiple peer ratings.
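The claim that averaging two to four peer ratings can match a single teacher rating follows from the Spearman-Brown prophecy formula, which relates single-rater reliability to the reliability of an average of k raters. A minimal sketch of that calculation is given below; the specific reliability values used are illustrative assumptions, not figures reported in the study (the abstract gives only the teacher ICC range of 0.40 to 0.53).

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the mean of k ratings, each with single-rater reliability r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

def raters_needed(r_single: float, r_target: float) -> float:
    """Spearman-Brown prophecy: how many raters must be averaged so that the
    composite reliability reaches r_target, given single-rater reliability r_single."""
    return r_target * (1 - r_single) / (r_single * (1 - r_target))

# Illustrative values only: suppose a single peer rating has ICC = 0.20 and the
# target is the lower end of the reported teacher ICC range, 0.40.
k = raters_needed(0.20, 0.40)
print(f"Peer ratings to average: {k:.2f}")  # between 2 and 3 under these assumptions
```

Under these assumed values, roughly three peer ratings would need to be averaged, consistent with the "between two and four" range reported across the four cohorts.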