
Systematically driven student feedback on teaching
– A case study of AU BSS

A common way to measure teaching quality in higher education is student evaluation of teaching (SET), in which student feedback is gathered through evaluation surveys. Such feedback is often used: 1) to provide feedback useful for improving teaching practices and programmes; 2) to provide recognition to teachers, e.g. in relation to tenure and promotion reviews; and 3) for research purposes, e.g. in relation to learning and educational performance (Davies et al., 2007; Marsh, 2007).

Since 2016, Aarhus BSS has systematically evaluated all courses on Bachelor's and Master's degree programmes. The overall objective of these evaluations is to ensure systematic and continuous support for ongoing efforts to improve the quality of degree programmes, and to involve students, teaching staff, directors of studies, etc. in an ongoing, institutionalised dialogue about student learning and the outcome of individual courses.

The use of SETs has, however, also been criticized. While research has shown that SETs can be useful for improving teaching, teachers often express concern about the validity, reliability and usefulness of SETs in assessing their individual teaching effectiveness (Young et al., 2018). Some give the impression that SETs are implemented merely to fulfil a regulatory obligation and provide little useful insight into the student learning experience on a course (Richardson, 2005). Others argue that universities are misdirected in measuring satisfaction as a proxy for teaching quality (Barrie, Ginns & Symon, 2008).

This study investigates whether systematically driven student feedback on teaching is a suitable catalyst for thoughtful and sustainable pedagogical and academic development and for the educational development of programmes, and if so, in what way.

Method

Given that it is difficult to accurately measure whether the aim of the SET system is achieved, the study proposes to explore and examine how student evaluations “work in and through practice”. To this end, the study is designed as a mixed-methods study addressing the perceptions and practices of stakeholders at various levels of the organization.

Quantitative analysis identifies general developments over the period, including relationships and effects between selected variables. The analysis is based on actual course evaluation data covering the relations between 1) the overall SET outcome; 2) other course data, e.g. ECTS, study level, discipline and number of students; 3) student background variables, e.g. high school GPA, type of high school diploma, age, gender, nationality, overall GPA and grades for the course in question; and 4) teacher background variables. The dataset covers all study programmes at Aarhus BSS. A sketch of this kind of analysis is given below.
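
To illustrate the kind of quantitative analysis described above, here is a minimal sketch of a mixed-effects regression relating overall SET scores to course- and student-level covariates, with study programme as a grouping factor. It is purely illustrative, not the project's actual model: the column names, the synthetic data and the use of Python's statsmodels are all assumptions made for the example.

    # Illustrative sketch only: hypothetical variable names and synthetic data,
    # not the project's actual dataset or model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "set_score": rng.normal(4.0, 0.6, n),       # overall SET outcome (e.g. on a 1-5 scale)
        "ects": rng.choice([5, 10, 15], n),         # course size in ECTS
        "n_students": rng.integers(15, 300, n),     # course enrolment
        "hs_gpa": rng.normal(8.5, 1.5, n),          # high school GPA
        "course_grade": rng.normal(7.0, 2.5, n),    # grade for the course in question
        "programme": rng.choice(["econ", "law", "ba"], n),  # study programme (grouping)
    })

    # A random intercept per programme absorbs programme-level variation;
    # the fixed effects estimate relationships between SET scores and covariates.
    model = smf.mixedlm(
        "set_score ~ ects + n_students + hs_gpa + course_grade",
        data=df,
        groups=df["programme"],
    )
    print(model.fit().summary())

A grouped model of this sort is one common way to separate course- or programme-level variation from student-level effects; the project's own specification may of course differ.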

To nuance the quantitative results and to produce deeper insight into how course evaluations are perceived and used, we also conduct in-depth interviews with students, teachers, and organizational stakeholders. The purpose of the qualitative analysis is partly to examine the interplay between initiatives and stakeholders, and partly to examine the effects of these initiatives from the perspective of the various stakeholders.

Perspectives

This case study contributes knowledge on the barriers to and facilitators of the implementation of quality development instruments. We examine in detail to what extent and how course evaluations are being used, in particular to raise awareness of the development of teaching and learning at Aarhus BSS. The intention is also to promote strategic quality enhancement initiatives and to develop indicators of educational performance, e.g. by delivering formative feedback to teachers about what worked well in their teaching, including how and why it was successful. Ideally, the study will show how teaching and learning activities can be improved and how to support the best possible conditions for student learning, completion rates and aspects associated with recruitment. In this way, Aarhus BSS can use the results to innovate its own practice.

The results will also support future quality work and measurement in higher education at policy and institutional levels.

Read more

  • Barrie, S., Ginns, P. & Symon, R. (2008). Student surveys on teaching and learning. Report, The University of Sydney.
  • Davies, M., Hirschberg, J., Lye, J., Johnston, C. & McDonald, I. (2007). Systematic influences on teaching evaluations: the case for caution. Australian Economic Papers, 46, 18-38. DOI: 10.1111/j.1467-8454.2007.00303.x
  • Marsh, H. W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In Perry, R. & Smart, J. C. (Eds.), The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective. Springer, Dordrecht, Netherlands.
  • Richardson, J. (2005). Instruments for obtaining student feedback: a review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387-415.
  • Young, K., Joines, J., Standish, T. & Gallagher, V. (2018). Student evaluations of teaching: the impact of faculty procedures on response rates. Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2018.1467878

Berit Lassesen

Associate Professor

Collaborators

The project team includes Associate Professor Berit Lassesen, Center for Educational Development, Aarhus University (project coordinator); Associate Professor Ebbe Krogh Graversen, Center for Research Analyses, Aarhus University; Associate Professor Lise Degn, Center for Research Analyses, Aarhus University; and Professor Carter Walter Bloch (PI), Center for Research Analyses, Aarhus University.

The project is part of PIQUED (Pathways to Improve Quality in Higher Education) and is supported by the Danish Agency for Higher Education and Science through a framework grant (Rammebevilling, grant ID: 7118-00001B).