Friday, May 18, 2007


The Paradox of RateMyProfessors.com

A reader wrote to ask a dean's-eye perspective on ratemyprofessors.com.

The short version: I consider it electronic gossip.

The slightly longer version: I've had enough social science training to know that small, self-selected samples lead to skewed outcomes.

The long version: And yet, from what I've seen, the ratings usually aren't that far off. Which is fascinating, given that the methodology is so bad.

I've never used it in anybody's evaluation, and won't. It's too easy to game, it uses criteria I don't recognize as valid measures of teaching performance (easiness of grading and “hotness”), and it can give far too much weight to the occasional disgruntled (or infatuated) student. I make a point of not even looking at it for people who are up for evaluation in a given year.

And yet, the few times I've checked it, the ratings have mostly been right. Comments posted alongside the ratings have often been scarily right. I'm fascinated by the paradox of an awful method leading to pretty good results.

Anybody who has read through teaching evaluations in bulk – and heaven help me, I have – can tell you that most students pay little heed to the individual questions or categories. If they liked the professor, they rate her high in everything; if they didn't, they rate her low in everything. It's not uncommon to see a line drawn down one column of twenty questions, giving the same answer to all twenty. (A blessed few students actually take the time and effort to disaggregate the questions. These are much more helpful. I recall one professor who got awful ratings in everything except something like “intellectually challenging,” where he was off the charts. Essentially, the students found him brilliant but incomprehensible.)

I think something similar happens with ratemyprofessors. Although it asks about easiness and hotness, a quick read of the comments suggests that the ratings really reflect a more general overall thumbs-up or thumbs-down. (The chili pepper is another issue.) The students who post to it actually make the tool smarter than it's designed to be. The comments, read in bulk, seem to suggest that students like clarity of presentation, clarity of grading criteria, a sense of humor, and a grounded ego. Honestly, so do I. They get grumpy at professors who take a month to return graded assignments. Honestly, so do I.

I've noticed, too, that they're pretty willing to 'forgive' harsh grading, if they believe that the harsh grading reflects high – as opposed to arbitrary – standards. And they loathe professors who spend significant class time on digressions about their personal lives. That strikes me as reasonable.

It's true that some professors have 'rock star' charisma and can get away with liberties that most of us couldn't, but I'm struck by the respect most students have for good teachers. I'm especially heartened by comments like “I busted my ass and still only got a B, but I never learned more.” That gives me hope. And I don't think it's out of bounds for a student to complain that he didn't get his first graded assignment back until Thanksgiving. Especially at the intro level, relatively frequent and prompt feedback makes a real difference.

I don't use rmp, but I'd recommend it as useful reading for instructors looking to get a reality check on what students value. Just don't take the categories too literally; read the comments instead.
