Friday, May 18, 2007

 

The Paradox of RateMyProfessors.com

A reader wrote to ask a dean's-eye perspective on ratemyprofessors.com.

The short version: I consider it electronic gossip.

The slightly longer version: I've had enough social science training to know that small, self-selected samples lead to skewed outcomes.

The long version: And yet, from what I've seen, the ratings usually aren't that far off. Which is fascinating, given that the methodology is so bad.

I've never used it in anybody's evaluation, and won't. It's too easy to game, it uses criteria I don't recognize as valid measures of teaching performance (easiness of grading and “hotness”), and it can give far too much weight to the occasional disgruntled (or infatuated) student. I make a point of not even looking at it for people who are up for evaluation in a given year.

And yet, the few times I've checked it, the ratings have mostly been right. Comments posted alongside the ratings have often been scarily right. I'm fascinated by the paradox of an awful method leading to pretty good results.

Anybody who has read through teaching evaluations in bulk – and heaven help me, I have – can tell you that most students pay little heed to the individual questions or categories. If they liked the professor, they rate her high in everything; if they didn't, they rate her low in everything. It's not uncommon to see a line drawn down one column of twenty questions, giving the same answer to all twenty. (A blessed few students actually take the time and effort to disaggregate the questions. These are much more helpful. I recall one professor who got awful ratings in everything except something like “intellectually challenging,” where he was off the charts. Essentially, the students found him brilliant but incomprehensible.)

I think something similar happens with ratemyprofessors. Although it asks about easiness and hotness, a quick read of the comments suggests that the ratings really reflect a more general overall thumbs-up or thumbs-down. (The chili pepper is another issue.) The students who post to it actually make the tool smarter than it's designed to be. The comments, read in bulk, seem to suggest that students like clarity of presentation, clarity of grading criteria, a sense of humor, and a grounded ego. Honestly, so do I. They get grumpy at professors who take a month to return graded assignments. Honestly, so do I.

I've noticed, too, that they're pretty willing to 'forgive' harsh grading, if they believe that the harsh grading reflects high – as opposed to arbitrary – standards. And they loathe professors who spend significant class time on digressions about their personal lives. That strikes me as reasonable.

It's true that some professors have 'rock star' charisma and can get away with liberties that most of us couldn't, but I'm struck by the respect most students have for good teachers. I'm especially heartened by comments like “I busted my ass and still only got a B, but I never learned more.” That gives me hope. And I don't think it's out of bounds for a student to complain that he didn't get his first graded assignment back until Thanksgiving. Especially at the intro level, relatively frequent and prompt feedback makes a real difference.

I don't use rmp, but I'd recommend it as useful reading for instructors looking to get a reality check as to what students value. Just don't take the categories too literally; read the comments instead.



Comments:
What you're saying only works if there are a reasonable number of comments. But if there are only a few, odds are that they're either extremely complimentary or extremely nasty, and often neither is true. How can it be explained that of my only 4 comments, the first 2 are along the lines of "I love this woman", and the last two are "She's a beast, don't take this class"? What the hell am I supposed to make of that?
 
How reliable can it be if literally anyone (including the professors themselves and their friends) can make comments that seem to be from professors?
 
Whoops. I meant "seem to be from students," not "seem to be from professors."
 
Anonymous: do you really think a high or decently high majority of comments are going to come from non-students? I highly doubt it. Why would people waste their time going to RMP.com to skew results? Maybe 1% of all comments are bogus, but I doubt even that many are. People have better things to do with their time.

Coming from both a teacher's and a student's perspective, I think if there is anything RMP.com shows us, it's that as teachers we need to be more inwardly focused. Stop blaming the department or the students and find something you are doing and change it. If a student fails, stop with the whole BS attitude of "Oh well, he was destined for failure." Even if that is the case, isn't refusing to take the time to change something you are doing wrong still a bad thing?

The more teachers can look for faults within themselves, the better teachers they will become.
 
Webs, anonymous here. The percentage of bogus comments really depends on the total number of comments. At a small school like mine, it's treated like a joke by faculty (and even staff), who add funny comments to their friends' listings. I think about 25%+ of mine are fake.

I also think that the weakness of the way the ratings (including smiley faces) are calculated means we also have to look externally--i.e., at the students and at the flawed ratings site. Our course evaluations have a nearly 100% participation rate and are substantive, so I'm lucky in that I get to actually use feedback on an actual class to help me look internally at how to improve, instead of relying on random, vague comments on RMP.com.
 
Sample size does matter, Ianqui. On average, however, sufficiently large emergent distributed networks tend to get things right.
 
Unless you subscribe to RMP, you can view only the last few comments--which further skews the "data." And I do need to put data in quotation marks, because I know for a fact that sometimes jealous/catty/whatever faculty members write negative comments about their perceived rivals.

The irony here is that Dean Dad is correct. Taken in the aggregate, more times than not, RMP comments give a pretty accurate picture of who's doing what in the classroom. The problem, of course, is distinguishing between real comments and fake ones and between disgruntled students with an ax to grind and students with real issues and concerns.

--Philip
 
Of course, looking inward at problems of student success requires data drawn from outside sources. I was simply making a blanket statement that some professors tend to look at the students and assume it's their fault when they fail.

I think it's more important to assume, as a professor, that I made a mistake, and to try to find ways to improve how I teach the class.
 
While there is an official metric by which faculty can measure their own strengths and weaknesses, there is no official metric by which students can measure the long-term "success rate" of the professors whose 12-16 week courses they choose to sign up for. While "RateMyProfessors" is most certainly an inappropriate tool of evaluation for fellow faculty and administration, it does go a little way toward empowering students to have some control over their academic choices. Let's face it--an outstanding professor can be a life-altering experience.
 
This comment is a bit off to the side but I thought it might be interesting to note that I actually had my students write about Ratemyprofessor and Turnitin in an essay on academic technology and modes of identity performance in the 21st century. Speaking from the arguments my students made and the discussions that they had, it seemed like they basically understood the ways that RMP presents skewed "data." But they still like writing in comments and reading them too.

I do check RMP occasionally, and I think if I saw multiple comments that had something to do with my actual teaching, such as "she never turns papers back on time," I'd probably look deeply inward (and also at course evals to see if the comments matched RMP) to determine if I should make some changes. But generally the occasional negative feedback I've gotten has been unhelpful. A comment such as "don't take this course" doesn't assist potential students or me, which is very frustrating, since I'm a big fan of knowing how I can improve as an instructor.

Sarah
 
I was recently skewered on RMP by my most recent batch of students BEFORE THE END OF THE SEMESTER.

It became readily apparent many of them did no reading, didn't familiarize themselves with syllabus policies (despite my reinforcement of them and reminders for them to re-read the syllabus), and failed to follow assignment instructions.

While not every student did these things, enough of them did to make it a horrendous teaching (and learning) experience. Considering the same comments appeared on RMP at midterm that I'm getting now at end of term, I am pretty sure it's the students' fault and not so much mine.

How often am I supposed to remind students to double-space their papers and run a spellcheck (at minimum) before submission? I thought 5 was more than sufficient.
 
I think the reason RMP works is the same reason Amazon ratings basically work. When someone is in the mood to give his or her opinion, and you listen, you get a pretty good opinion.

I definitely agree that the comments, as opposed to the ratings, are where it's at.
 
I think that RMP is a mixed bag, and I really don't like it. To use it as a professional rating tool--which, I understand, a few schools actually do--is irresponsible. All the same, I do check in every once in a while. I was out of teaching for a couple of years, so I guess my old evaluations went away. I now have only one rating, quite a low one. Students are, of course, entitled to their opinions, but this student is not fully consistent or honest. For example, a very low number in one category is contradicted by the student's positive comments about that aspect of my teaching. Also, the student misrepresented the workload of the class, and I apparently have no way, short of posing as a student myself and making my own comment, to correct this fallacious statement.

In view of a rather large number of students this past year who have personally told me that they thought I was doing a very good job, I find this one evaluation rather discouraging. It is not in line with my official evaluations--far from it--and yet there it is, tainting my reputation. Now, ironically, I'm hoping that some of my other students will post evaluations that are more honest and realistic. Odd, isn't it, when I dislike RMP so intensely?

I also find it discouraging that so many incoming freshmen fully expect to be handed a high grade on a silver platter (a la high school, I suppose) and don't seem to understand that they actually have to perform very well to get an A or a B. The difference between high school and university seems to be lost on them, despite my warnings that my course, and university in general, will not be a cakewalk. This attitude is pervasive and doesn't just manifest itself on RMP, but I find it discouraging nonetheless.

Not to mention that the whole "hotness" (or hottness, as I saw it spelled on RMP) thing is very juvenile. Many of my own favorite professors have been very average in the looks department, and many of them are just plain odd-looking. A few have been drop-dead gorgeous. Does this have any bearing on what I think of their teaching? Not that I am aware of. This aspect of RMP only indicates how shallow people can be. We're talking about college here, not the Mr. Universe competition or the Miss America pageant. Sheesh!
 
I agree that RMP has a line on "truth," or at least strong feelings. The problem is trying to translate the very subjective comments into terms that are meaningful to an instructor and thus able to be used for positive change. RMP is qualitative data - skewed, yes, but skews can be defined. "Hate" and "love" are undetailed, but at least the directionality of the response is clear.

I like Sarah's approach. A "member check-in" like this, even with (or precisely with) a class other than the one posting the ratings would often, I think, help to elucidate the meaning of various statements and the general trend, or at least identify some possibilities.

Some comments, like "doesn't return tests for weeks" or "really fun lectures that keep you awake," are easy. It's the sweeping and nonspecific "he sucks" or "she rocks" that have to be unpacked to get any cues for improvement.

And then there's the faculty who get *no* ratings, or ratings but no comments, versus those who get dozens. What does that mean?

I suspect 'viral networking' in action. RMP, at least at my school, is like FaceBook -- popular among specific circles, nearly unknown beyond that. In the case of my school, I can tell from FaceBook, and infer from the courses that get high RMP comments, that it's mainly frat and sorority members choosing to do it, and no doubt applying an equally narrow set of values.
 
Whatever happened to word of mouth? That's how I got information about the quality of a professor back in the day. When I started teaching a couple of years ago as a visiting assistant professor, my first two weeks were very shaky. Almost immediately a comment appeared on RMP saying that "teaching is not her thing." That certainly didn't boost my confidence in front of the students. At the same time I was applying for tenure-track positions, and the comments on RMP worried me. My ratings have since improved and I was offered four tenure-track positions, but I am still not thrilled that all my friends and foes around the country know whether my students find me hot or my labs tedious. I have promised myself and my best friend that I will never consult RMP again once I begin my new position.
 
Re: hot or not.

Of course it's juvenile. Of course it's irrelevant as to lecturer choice. It's also amusing, and the act of including an irrelevant amusing bit loosens up commenters, IMHO.
 
I've gamed the system, myself. A dear friend was mock-distraught that she didn't have any chili peppers. Sometimes, you do what you've gotta do.
 
From my undergrad days, I understand the reasoning behind having some sort of tool to help you make the best choice for where your tuition money is going.
However, despite getting mostly positive teaching reviews when I taught seminars as a TA, I was petrified that the few students who clearly disliked me (and who included ridiculous statements on their formal evaluation forms, like claiming I rarely showed up for office hours when I never missed one) would start a listing for me on that website.
 
If professors don't like being rated on RateMyProfessors.com, then they should just post their syllabus on SyllabusCentral.com and allow students to make their own decision about the course while registering for classes. A major reason why students use RateMyProfessors.com is that they don't have any other resources to use, and they aren't given the syllabus until the first day of class. Professors should be proactive and use SyllabusCentral to connect with prospective students and showcase their courses.
 