The university has a website: Course Evaluations. Here's what the administrators say ...
Course evaluations are read by many people at UofT, including instructors, chairs, deans, the provost and the president. They are used for a variety of purposes, including:
- To help instructors design and deliver their courses in ways that impact their students’ learning
- To make changes and improvements to individual courses and programs
- To assess instructors for annual reviews
- To assess instructors for tenure and promotion review
- To provide students with summary information about other students’ learning experiences in UofT courses
It is essential that students have a voice in these key decision-making processes and that’s why your course evaluation is so important at the University of Toronto. Your feedback is used to improve your courses, to better your learning experience and to recognize great teaching.
There's so much wrong with these statements that I hardly know where to begin. First, course evaluations may be read by some instructors but certainly not by chairs, the provost, and the President of the University of Toronto. At best, a departmental chair might routinely read a summary of the scores on course evaluations for a few lecturers. It's absurd to suggest that the President and/or Provost reads course evaluations at a university of 75,000 students taking, on average, five courses per semester. That works out to roughly 375,000 course enrolments, and as many potential evaluations, every term.
What are the legitimate purposes of student evaluations, given that course evaluations are not good indicators of teaching effectiveness and that students are not in a position to judge whether the material is accurate or up-to-date?
To help instructors design and deliver their courses in ways that impact their students’ learning
This may be true in some cases. Sometimes students submit valuable comments that can make a difference. This is especially true in upper-level courses. On the other hand, most students are not knowledgeable about the different ways that courses can be taught. In a memorize-and-regurgitate course, for example, they are not demanding a switch to student-centered learning. Most of the fundamental changes in teaching come from instructors who really care about learning, not from student evaluations.
To make changes and improvements to individual courses and programs
Again, there are a few times when comments from the students lead to minor improvements but, for the most part, changes and improvements are motivated by other concerns. It's rare that we encounter a student who really understands how to design a program and how it could be improved. They usually don't see the big picture.
To assess instructors for annual reviews
In my experience at this university (36 years), this doesn't happen very often with professors. It does happen with sessional lecturers (see video below) but given what we know about the effectiveness of student evaluations from the pedagogical literature, it shouldn't happen at all.
To assess instructors for tenure and promotion review
There might be rare exceptions when student evaluations are abused in this way but I've never seen an example of a tenure decision or a promotion that's been significantly affected by student evaluations. Maybe it's more important in other departments but I doubt it. This is a myth that needs busting. It's not how things work at a major research-intensive university. Nobody wants to admit this.
To provide students with summary information about other students’ learning experiences in UofT courses
It's probably true that some students pick some courses based on student evaluations. (They are made public.) The important questions are: should students be basing their decisions on student evaluations, and should the university be encouraging this?
Here's a scary video from the University of Toronto website. It's scary because we are supposed to be teaching, and practicing, critical thinking and there's no evidence of that in the video. We are supposed to base our decisions on evidence and rationality, but the pedagogical literature is almost unanimous in condemning student evaluations as reliable measures of teaching effectiveness. I wonder if the two professors in this video have actually studied the issue and read the literature?
Maybe they should read this article: Students don’t know what’s best for their own learning.
... universities that rely on student evaluations are likely to punish good teachers and encourage those who simply make it easy for students. Most universities have codes of conduct that require decisions to be made on valid evidence. Any manager discussing student evaluations when reviewing lecturers’ performance is probably breaching that part of their own job requirements. Given the evidence, student evaluations are a distraction from the responsibility to provide the best possible education for the nation.
For university teachers, the challenge remains the same it has always been: keeping students motivated, while ensuring that they learn. Part of achieving that requires teaching students how to learn, not just what to learn, and to ask them what they are working on and how hard they are trying. But it is probably best to avoid asking if they are happy with the course.
One of the additional motivations for the change was to cut down on class time spent on student evaluations, so the university decided to switch to online evaluations, as promoted in this embarrassing video.
Guess what happened? The participation rate plummeted so that in many courses less than 25% of the students filled out the evaluations. Who could possibly have seen that coming?
Here's a paper that was just published, but similar results have been published over the years in the pedagogical literature, and those papers were certainly available to the decision makers at the University of Toronto before they decided to go online.
Capa-Aydin, Y. (2014) Student evaluation of instruction: comparison between in-class and online methods. Assessment & Evaluation in Higher Education, 1-15. Published online December 2014. [doi: 10.1080/02602938.2014.987106]
This study compares student evaluations of instruction that were collected in-class with those gathered through an online survey. The two modes of administration were compared with respect to response rate, psychometric characteristics and mean ratings through different statistical analyses. Findings indicated that in-class evaluations produced a significantly higher response rate than online evaluation. In addition, Rasch analysis showed that mean ratings obtained in in-class evaluation were significantly higher than those obtained in online evaluation. Finally, the distributions of student attendance and expected grade in both modes were compared via chi-square tests, and were found to differ in the two modes of administration.