One of the issues that was raised was student evaluations. The main student group for arts and science students has a standard student evaluation form that it distributes in most classes. The results are published in the Anti-Calendar.
The Department of Biochemistry does not participate in this exercise and, consequently, none of our courses are in the Anti-Calendar. We use our own evaluation forms with very different questions and the results are summarized for internal use within the department.
Some students suspect that the department has blocked the publication of student evaluations in the Anti-Calendar. They suspect that the reason for doing this is that all of our courses have very bad ratings and we don't want students to find out how bad we are. (That kind of reasoning may actually work in our favor. Students who think like that stay out of our courses.)
The truth is that we have been doing our own evaluations for 40 years and we have different questions, and a different scale, than the ones used by the student union. That's the only reason why we're not in the Anti-Calendar.
However, even if we switched to using the standard student forms, I would remain opposed to collecting and publishing student evaluations for another reason. (The following opinion is not departmental policy, unfortunately.)
I've blogged about this in the past [Student Evaluations] [Student Evaluations Don't Mean Much]. The fact is that student evaluations don't evaluate what students think they're evaluating. Many scientific studies have been done and the evidence strongly suggests that student evaluations are based mostly on whether students like the personality of the Professor.
I teach science and scientific reasoning. I think it's important to ask whether the collecting and publication of student evaluations is a worthwhile and valid exercise. If student evaluations are scientifically justified then they should be published. If the evidence doesn't back up the claims then they are worthless. This isn't hard to follow, is it?
Publication of worthless student evaluations may actually be counter-productive. It may turn students away from courses they should be taking and encourage them to take easy bird courses they should be avoiding.
Until it can be demonstrated that student evaluations are useful and scientifically valid, I will continue to exercise my right to block publication of my evaluations, regardless of any decision by the Department of Biochemistry. And I will continue to argue against using flawed student evaluations in tenure and promotion decisions. I will also oppose all attempts to reward faculty members for excellence in teaching based entirely—or mostly—on student evaluations. Any other position is anti-scientific, in my opinion. No competent scientist can ever justify relying on standard undergraduate student evaluations to evaluate teaching ability.
Let's hear what everyone else thinks of student evaluations.
Here are a few interesting links to stimulate discussion.
Part of the discussion requires that you understand the "Sandbox Experiment" as described in [Of What Value are Student Evaluations?].
... true believers (who too often seem to have a stake in selling institutions a workshop or an evaluation form) proclaim that student evaluations cannot be manipulated or subverted. Anyone who believes such claims needs to read the first part of Generation X Goes to College by Peter Sacks. This part is an autobiography of a tenure-track experience by the author in an unnamed community college in the Northwest. Sacks, an accomplished journalist who is not a very accomplished teacher, soon finds himself in trouble with student evaluations. Sacks exploits affective factors to deliberately obtain higher evaluations, and describes in detail how he did it in Part 1, called "The Sandbox Experiment." Sacks obtains higher evaluations through deliberate pandering, but not through promotion of any learning outcomes. For years, he manages not only to deceive students, but also peers and administrators, and eventually gets tenure based on higher student evaluations. This is a brutal case study that many could find offensive, but it proves clearly that (1) student evaluations can indeed be manipulated, and (2) faculty peer reviewers and administrators who should know better than to place such blind faith in student evaluations sometimes do not.

Read Student Evaluations: A Critical Review for a description of the Dr. Fox Effect, another one of those standard examples that everyone should be aware of if they want to debate the issue of student evaluations.
This article also has a pretty good discussion of the "academic freedom" issue—which I prefer to call the "controversy conundrum." It is a very real problem. The more controversial your lectures, the more likely you are to receive lower student evaluations of faculty (SEF). Yet, teaching controversial issues is the essence of a university education.
There exist simple and well-known ways for a professor to avoid giving offense. One technique, when a class ostensibly focuses on a controversial subject matter, is to focus one's lectures on what other people have said. For example, a professor may, without raising any eyebrows, teach an entire course of lectures on ethics without ever making an ethical statement, since he confines himself to making reports of what other people have said about ethics. This ensures that no one can take offense towards him. During classroom discussions, he may simply nod and make non-committal remarks such as "Interesting" and "What do the rest of you think about that?", regardless of what the students say. (This provides the added "advantage" of reducing the need both for preparation before class and for effort during class, on the part of the professor.) Although pedagogic goals may often require correcting students or challenging their logic, SEF-based performance evaluations provide no incentive to do so, while the risk of reducing student happiness provides a strong incentive not to do so. Some students may take offense, or merely experience negative feelings, upon being corrected, whereas it is unlikely that students would experience such negative feelings as a result of a professor's failure to correct them. Overall, SEF reward professors who tell their students what they want to hear.

As far as I'm concerned, it's much more fun to tell students what they don't want to hear!
The article also comments on the perception of students as consumers, and of universities as businesses whose goal is to please the customer. Nothing could be further from the truth.
A fourth reason why SEF are widely used may be the belief that the university is a business and that the responsibility of any business is to satisfy the customer. Whether they measure teaching effectiveness or not, SEF are probably a highly accurate measure of student satisfaction (and the customer is always right, isn't he?). However, even if we agree to view the university as a business, the preceding line of thought rests upon a confusion about the product the university provides. Regardless of what they may themselves think at times, students do not come to college for entertainment; if they did, they might just as well watch MTV for four years and put that on their resumes. Students come to college for a diploma. A diploma is a certification by the institution that one has completed a course of study and thereby been college-educated. But that will mean nothing unless the college or university can maintain intellectual standards. A particular student may be happy to receive an easy A without having to work or learn much, but a college that makes a policy of providing such a product will find its diplomas decreasing in value.

Here are some interesting comments from Professor Fich at the University of Toronto [Are Student Evaluations of Teaching Fair?].
Part of a university's responsibility may be to satisfy its students. But it is also a university's responsibility to educate those individuals whom it is certifying as educated. Unfortunately, those goals are often in conflict.
Finally, I'd like to hear from you on the following point. Why are student evaluations anonymous? Shouldn't we be encouraging students to stand up and take responsibility for their opinions rather than hiding behind anonymity? Yes, I'm well aware of the fact that students think they will be punished for a negative evaluation. This is an unreasonable and illogical fear in most cases (i.e., at a respectable university). The point of a university education is to engage in debate and discussion. Trust me, most Professors can take it. Most students should start learning how to do the same.
[Image Credit: The cartoon is from the ASSU Anti-Calendar]
5 comments:
Ah, if you had any idea of the way those damn things are treated as Holy Writ in promotion and tenure decisions at liberal arts colleges in the US, your head would explode...
At the college I taught at, the means of the numerical responses were calculated to two decimal places. I once spent a fruitless hour trying to explain the concept of significant digits to a dean (who was an utterly innumerate classicist). I didn't make too many friends doing stuff like that in those days...
I am very glad to be long since out of that racket.
I'll echo what Steve said.
In my first semester, I had lower than average evaluations.
My colleagues taught me how to "cook them" (example: give them on days when only the better students show up) and presto, I "greatly improved" as a teacher.
Finally, we've been able to get most of the numbers off of our evaluation forms.
Re: why are student evaluations anonymous?
It doesn't make sense for it to be anonymous when the evaluation takes place at the end of a course since afterwards, the professor will presumably have nothing to do with the student anymore.
From my personal experience, it's true that most instructors can take criticism and recognize that university is all about discussion and debate. But instructors that have this disposition tend to be good teachers anyway and don't receive much criticism in the first place.
There are others, however, that have a massive ego and behave in the opposite way. They're crappy teachers but too proud to realize it and criticizing them only exacerbates the problem. (In fact, one of my instructors right now is like this). In these cases, I prefer to remain anonymous.
One note on the anonymity: If the evaluation forms allow for comments, the prof can often tell who wrote it. At least, that's been my experience as a lecturer when looking at my evaluations.
I have posted 3 entries on my recent student evaluations, with a couple more to come. It's interesting to compare and contrast evaluations between different courses. Personally, I have my own questions that I ask, which are extremely helpful, not the generic ones provided by my University.