Friday, May 18, 2007
The Paradox of RateMyProfessors.com
A reader wrote to ask a dean's-eye perspective on ratemyprofessors.com.
The short version: I consider it electronic gossip.
The slightly longer version: I've had enough social science training to know that small, self-selected samples lead to skewed outcomes.
The long version: and yet, from what I've seen, the ratings usually aren't that far off. Which is fascinating, given that the methodology is so bad.
I've never used it in anybody's evaluation, and won't. It's too easy to game, it uses criteria I don't recognize as valid measures of teaching performance (easiness of grading and “hotness”), and it can give far too much weight to the occasional disgruntled (or infatuated) student. I make a point of not even looking at it for people who are up for evaluation in a given year.
And yet, the few times I've checked it, the ratings have mostly been right. Comments posted alongside the ratings have often been scarily right. I'm fascinated by the paradox of an awful method leading to pretty good results.
Anybody who has read through teaching evaluations in bulk – and heaven help me, I have – can tell you that most students pay little heed to the individual questions or categories. If they liked the professor, they rate her high in everything; if they didn't, they rate her low in everything. It's not uncommon to see a line drawn down one column of twenty questions, giving the same answer to all twenty. (A blessed few students actually take the time and effort to disaggregate the questions. These are much more helpful. I recall one professor who got awful ratings in everything except something like “intellectually challenging,” where he was off the charts. Essentially, the students found him brilliant but incomprehensible.)
I think something similar happens with ratemyprofessors. Although it asks about easiness and hotness, a quick read of the comments suggests that the ratings really reflect a more general overall thumbs-up or thumbs-down. (The chili pepper is another issue.) The students who post to it actually make the tool smarter than it's designed to be. The comments, read in bulk, seem to suggest that students like clarity of presentation, clarity of grading criteria, a sense of humor, and a grounded ego. Honestly, so do I. They get grumpy at professors who take a month to return graded assignments. Honestly, so do I.
I've noticed, too, that they're pretty willing to 'forgive' harsh grading, if they believe that the harsh grading reflects high – as opposed to arbitrary – standards. And they loathe professors who spend significant class time on digressions about their personal lives. That strikes me as reasonable.
It's true that some professors have 'rock star' charisma and can get away with liberties that most of us couldn't, but I'm struck by the respect most students have for good teachers. I'm especially heartened by comments like “I busted my ass and still only got a B, but I never learned more.” That gives me hope. And I don't think it's out of bounds for a student to complain that he didn't get his first graded assignment back until Thanksgiving. Especially at the intro level, relatively frequent and prompt feedback makes a real difference.
I don't use rmp, but I'd recommend it as useful reading for instructors looking for a reality check on what students value. Just don't take the categories too literally; read the comments instead.
Coming from a teacher and student perspective, I think if there is anything RMP.com shows us, it's that as teachers we need to be more inwardly focused. Stop blaming the department or the students; find something that you are doing and change it. If a student fails, stop with this whole BS attitude of, "Oh well, he was destined for failure." Even if that is the case, not taking the time to change something you are doing wrong is still a problem.
The more teachers look for faults within themselves, the better teachers they will become.
I also think that some of the weaknesses in how the ratings (including the smiley faces) are calculated mean we also have to look externally--i.e., at the students and at the flawed ratings site. Our course evaluations have a nearly 100% participation rate and are substantive, so I'm lucky in that I get to use feedback on an actual class to help me look internally at how to improve, instead of relying on random, vague comments on RMP.com.
The irony here is that Dean Dad is correct. Taken in the aggregate, more times than not, RMP comments give a pretty accurate picture of who's doing what in the classroom. The problem, of course, is distinguishing between real comments and fake ones and between disgruntled students with an ax to grind and students with real issues and concerns.
I think it's more important to assume as a professor I made a mistake, and try to find ways to improve how I teach the class.
I do check RMP occasionally, and I think if I saw multiple comments that had something to do with my actual teaching, such as "she never turns papers back on time," I'd probably look deeply inward (and also at course evals to see if the comments matched RMP) to determine whether I should make some changes. But generally the occasional negative feedback I've gotten has been unhelpful. A comment such as "don't take this course" doesn't assist potential students or me, which is very frustrating since I'm a big fan of knowing how I can improve as an instructor.
It became readily apparent many of them did no reading, didn't familiarize themselves with syllabus policies (despite my reinforcement of them and reminders for them to re-read the syllabus), and failed to follow assignment instructions.
While not every student did these things, enough of them did to make it a horrendous teaching (and learning) experience. Considering the same comments appeared on RMP at midterm that I'm seeing now at end-of-term, I am pretty sure it's the students' fault and not so much mine.
How often am I supposed to remind students to double-space their papers and run a spellcheck (at minimum) before submission? I thought five times was more than sufficient.
I definitely agree that the comments, as opposed to the ratings, are where it's at.
In view of a rather large number of students this past year who have personally told me that they thought I was doing a very good job, I find this one evaluation rather discouraging. It is not in line with my official evaluations--far from it--and yet there it is, tainting my reputation. Now, ironically, I'm hoping that some of my other students will post evaluations that are more honest and realistic. Odd, isn't it, when I dislike RMP so intensely?
I also find it discouraging that so many incoming freshmen fully expect to be handed a high grade on a silver platter (a la high school, I suppose) and don't seem to understand that they actually have to perform very well to get an A or a B. The difference between high school and university seems to be lost on them, despite my warnings that my course, and university in general, will not be a cakewalk. This attitude is pervasive and doesn't just manifest itself on RMP, but I find it discouraging nonetheless.
Not to mention that the whole "hotness" (or hottness, as I saw it spelled on RMP) thing is very juvenile. Many of my own favorite professors have been very average in the looks department, and many of them are just plain odd-looking. A few have been drop-dead gorgeous. Does this have any bearing on what I think of their teaching? Not that I am aware of. This aspect of RMP only indicates how shallow people can be. We're talking about college here, not the Mr. Universe competition or the Miss America pageant. Sheesh!
I like Sarah's approach. A "member check-in" like this, even with (or precisely with) a class other than the one posting the ratings, would often, I think, help elucidate the meaning of various statements and the general trend, or at least identify some possibilities.
Some comments, like "doesn't return tests for weeks" or "really fun lectures that keep you awake," are easy. It's the sweeping and nonspecific "he sucks" or "she rocks" that have to be unpacked to get any cues for improvement.
And then there's the faculty who get *no* ratings, or ratings but no comments, versus those who get dozens. What does that mean?
I suspect 'viral networking' in action. RMP, at least at my school, is like Facebook -- popular among specific circles, nearly unknown beyond them. In the case of my school, I can tell from Facebook, and infer from the courses that get the most RMP comments, that it's mainly frat and sorority members choosing to do it, no doubt also applying an equally narrow set of values.
Of course it's juvenile. Of course it's irrelevant as to lecturer choice. It's also amusing, and the act of including an irrelevant amusing bit loosens up commenters, IMHO.
However, despite getting mostly positive teaching reviews when I taught seminars as a TA, I was petrified that the few students who clearly disliked me (and who included ridiculous statements on their formal evaluation forms, like claiming I rarely showed up for office hours when I never missed one) would start a page for me on that website.