Abstract
A number of studies have questioned the criterion validity of peer-assessed oral presentations. Claims have been made that students are likely to employ a different perspective from teachers when assessing overall presentation quality, even when both are guided by a common checklist of relevant skill components. To date, no empirical investigations have been undertaken to determine how students differ from staff in the criteria they apply. This paper analyses peer and teacher assessment data from thesis presentations made by engineering students in a fourth-year communications subject. The data consist of peer and teacher ratings on eight skill components listed on a checklist (used for feedback only), together with a global mark for the presentation (the summative assessment). The scores on the eight items were subjected to multiple regression analysis using the global mark as the criterion. Substantial differences were found between the two multiple regression equations. Discussion focuses on how these differences affect the validity of peer assessments and on the level of agreement between teacher and student assessments.