Thursday, December 16, 2010
How to Read Student Evaluations
In a phrase: look for outliers. It’s really about spotting the folks who are badly trailing the rest of the pack. Putting much weight on the difference between the lower middle and the upper middle is missing the point. There’s considerable normal variation, and all kinds of irrelevancies can push one instructor slightly above or below another. But when the same few names show up at the bottom of the list semester after semester, it’s difficult to write that off to random variations.
That’s where comments are useful. Some comments suggest ideological or cultural antipathy at work; those discredit themselves. (About once a year I get a student complaining that his professor is gay, and wanting to know what I’m going to do about it. “What would you suggest?” usually ends the discussion.) But some comments are actually revealing. I tend to discount references to “arrogance” or “full of himself,” but I take seriously comments like “he takes two months to grade papers” or “he’s incredibly disorganized.” When clusters of students make the same basic comment, there’s usually at least a conversation to be had.
Some professors like to say that student evaluations shouldn’t exist, or at least shouldn’t count for anything. I have to disagree. When a dean does a class observation, she observes one class meeting. Things like “speed of grading” simply won’t show on the radar, and of course, anyone can have an uncharacteristically good or bad day. But students see every day, so things that might seem inconsequential (or be entirely invisible) in a single moment take on their full significance.
Students also have different ‘eyes’ than faculty peers or deans, and reaching them is really the point. Inferential leaps that may seem obvious to someone with a doctorate in a related field may be entirely opaque to a freshman encountering the subject for the first time. It’s hard to fake ignorance, so we need to ask those who don’t have to fake it.
That said, I’m often struck by faculty paranoia around student course evaluations. They’re part of the picture, but they’re far from dispositive. In my student days at Snooty Liberal Arts College I remember a young professor -- maybe second year -- handing out the evaluation forms and then just sitting there and staring at us as we filled them out. He seemed paralyzed with fear that we’d be lukewarm and get him fired. After an uncomfortable silence, we started filling out the forms, wondering when/if he would leave. Another student -- I can’t take credit, though I wish I could -- raised his hand and asked “how many m’s in ‘incompetent?’” It broke the tension, even if it seemed just this side of cruel.
Of course, most student comments aren’t quite so clever. I don’t know why students feel compelled to comment on professorial hotness, and I wince whenever I read something like “he helped me write more better.” (I actually got that one once.) My brother reports that he once had a professor in the later stages of his career, perfectly fine in class but long past caring about evaluations. The students decided collectively to write their comments as baseball metaphors. “Although he’s lost a little on his fastball, he makes up for it by painting the corners.” Okaayyyy....
Wise and worldly readers, have you ever read anything on a student course evaluation that stuck with you? Is there a right way to read these things?
My student population is a non-traditional one, and so we don't seem to contend as much with some of the silliness in student evaluations that you described. However, I can attest to the fact that students are oftentimes afraid to complete their student evaluations or to be honest in them. They are fearful that faculty will find out what they said and use it as an excuse to grade them poorly. While I regularly reassure them that the student evaluation process is specifically designed so that the information is aggregated and kept anonymous, the fear still exists.
Student evaluations are a valuable feedback mechanism, though as you caution, the feedback garnered from them has to be interpreted carefully. Both faculty and students need to be encouraged to make the most of this useful tool and to not fear it.
Normally, my evals are great. I did feel pretty nervous that first time though.
Strangest was one student who said I wasn't fair in her mid-term eval and had graded her down. I can only give pass or fail at the end of the year (she passed), and the mid-term eval is just my conversation with her site supervisor, with none of my own comments. Really irritated me.
evaluations need questions geared in such a way that the comment sections don't need to be so brutal.
working in the private sector, when i get a call from another company about an ex worker, my HR department told me i can only state 2 things: if person X worked here, and if i would hire them again.
that last part is key. would students take that class again?
my wife is a public school teacher. her principal sits in on a full class at least twice a year to evaluate her. i never once saw a dean sit in on a class to evaluate a professor. why not get a first hand look for yourself?
my biggest complaint (as an engineer) was always along the lines of "Does not possess the ability to speak English clearly." i was/am amazed that fluent English is not a requirement. it was even harder for my friends who had english as a second language.
I think another problem with evaluations, one specific to my institution, is the questions themselves - questions which ask students to evaluate things about the professor that have nothing to do with the content of the course or what they may or may not actually have learned. (Questions that ask students to evaluate how smart they thought the professor was, how likable the professor is, etc. Yes, women and faculty of color get much lower evaluation scores and much more offensive comments, semester in and semester out, on these questions, and yes, most of us stop looking at our evaluations at a certain point because they make you feel completely disgusting about yourself and about your students. When I and others have brought up the problem, we've been told by well-meaning administrators and d00ds on the faculty who've been around for 20+ years that they "take that into account" and "not to worry." Ugh.) While what I'm describing is specific to my institution, I know there are other institutions out there where this is a problem, too.
At any rate, the result for me is that I have stopped, since tenure, reading my student evaluations. (I always get good evals by the numbers - but the comments which curse at me, comment on my physical appearance, express anger at my intellect, etc., make me want to vomit and make me hate my students, which is not good for my teaching.) And when I want student feedback that will actually help with my teaching, I come up with my own questions for students to answer or I - god forbid - talk to my students.
The research behind what they are saying is pretty impressive. Sadly, this scholarship rarely makes it into the actual evaluation forms (or into how we read them). It is a science, but we let amateurs (i.e., faculty and administrators together) carry it out.
No faculty member should have to fear that their job hinges on student evaluations, and no students should think they have that kind of power. But many administrators need a good talking-to about how to use them properly.
We know that there are lots of ways profs can 'juke' the stats: dress nicely, show videos, always post notes ahead of time, tell jokes, belittle themselves, give easy grades. Grading, in particular, is easy to manipulate (and probably the best predictor of your evaluation). Even when the course requires a certain grade range, you can simply make the midterm and assignments (before the evaluation) easy and the final paper and exams (after the evaluation) hard.
Dean Dad's point about outliers is true, though: in 10 years of looking at these things, it's always the same people at the very top, and the same people at the very bottom. We ask the faculty at the top to run workshops and push the faculty at the bottom to attend them.
In such a case, anxiety (not "paranoia"--I wish such terms would not be used so casually) can perhaps be understood.
There is a huge jump in expectations between high school and college writing and reading in our community, and students take that shift out on the instructor when they should be complaining to their school district for only expecting the minimum from them.
Honestly, I don't see how you can expect students to fill out evaluations in a meaningful way if you don't explain to them throughout the semester what kinds of feedback are useful. I would do a short seminar on constructive criticism and have the students fill out evaluations for me and for each other throughout the term so that by the end of the semester, they had a lot of practice giving feedback.
I also did things that had nothing to do with my teaching that raised my scores. I started making a practice of sharing the average scores for the department and the college with my students because I didn't think they understood how their "average" rating was hurting me and the other two instructors in our course (the mean rating for the college was "excellent" – two points above “average”.) I discussed very bluntly with the students that our evaluations from our chair were influenced by student evaluations and that our bonus and pay were determined in part by student evaluations. I started doing midterm teaching evaluations that were for my own use. Even if I did not act on any of the suggestions in those evals, my scores went up on the final evals.
All of this made me very cynical about student evals. I wanted to know what was helping my students learn in the course. But my observation was that my evaluations seemed to have more to do with my students' perception that I "cared" about them than whether or not they actually learned the material. I think this is the key flaw in evaluations – students judge faculty not just on how much they the students learn but on how “hard” the course was or on how they feel about the professor. The students’ goals (to get through the course and get their degree, to work – but not too hard, to get a good grade) and the goals of the faculty and college (meet curricular objectives, meet assessment goals) are in conflict.
About 1/3 of students fudge their faculty evaluations. Some give higher ratings to teachers they like; even more give lower ratings to teachers they don't like.
I would hope they don't. They are too easy to spoof — there's no check that the person making the rating actually took the course. I've seen several online campaigns to trash someone's rating, and I know of at least two cases where the number of ratings was at least an order of magnitude higher than the number of students enrolled in the professor's course.
Yeah, right. Tell that to the dean and chair who dragged me in for a 75-minute meeting on the basis of little more than the numeric average of my evaluations being on the wrong side of "very good" by a tenth of a point, neither of them ever having set foot in my classroom in several years' employment. And neither has set foot in there since, despite an "improve or else" directive.
I have a lot of students and they are writing for me every single class day of the semester: so, three days of class per week and three days of reading and grading. Everything has to be read by the following class period because if I dogged off, I quite literally could not squeeze two days worth of reading into a single day. No one has ever waited more than 48 hours (72 on weekends) for assignment turnaround.
It's a lot of work and a pain in the ass, but I don't think writing students can afford to spend a whole semester polishing a few pieces to perfection. They have to write a lot, and I have to read a lot.
So, the only student evaluation that ever pissed me off was a student checking off the box next to: 'Fails to return papers quickly.'
The student signed his evaluation--and it's true that in his case I failed to return his papers quickly. But that's because he would regularly skip several classes in a row! I could hardly return his frickin papers if he wasn't there!
Another thing: students need to be told what their evals will be used for. They've spent much of their life filling out responses in all sorts of contexts (not necessarily educational ones), most of which disappeared into a black hole as far as they were concerned.
It's not unreasonable for them to have no clue that this isn't just a piece of pointless bureaucracy that will go into a filing cabinet somewhere and be forgotten. Faculty are paranoid (well, untenured faculty) because they are aware that their career may turn on something to which the students may quite reasonably give very little thought - especially when it's the fifth evaluation that the student has had to fill out that week, and s/he is desperate for the semester just to be over.
Why would anyone sacrifice their career in order to keep their grading average .3 lower? Or not to bring in cookies once a week for the three weeks leading up to evals?
You don't have to sacrifice an iota of teaching quality. Juke the evals and if you get a chance to change the culture, do so.
Do armies poll their basic trainees to see if their drill sergeants are doing a good job? FUCKE NO!
To what extent can questions be written to distinguish between students who might give low scores unfairly (e.g., because s/he didn't want to have to work hard for an A) vs. those who have useful feedback to offer? I think it's good that my school asks separate questions about a professor's command of subject matter, preparedness for class, and ability to engage with students; unless a professor is a total wreck, you can expect a student who's being honest and thoughtful to give low marks for one or two of those areas but not all three.
I also appreciate that the form specifically asks whether the professor "starts and dismisses class according to scheduled meeting time"; I had one class in which the professor regularly ran over, and I appreciated the chance to point out this one problem while praising his expertise and engaging classroom manner in other questions.
Student evals only have one important part - the free-form comment box. The rating category questions are only there to jog someone's memory when they're put on the spot on eval day.
Have a 1-5 rating system, but require any 1s or 5s to be substantiated by a comment. Now you can scan the comments for any legitimate gripes.
The numbers, well they're useful in one way. The teachers who are too easy, everyone's buddy, afraid to grade strictly, will get a perfect 4.0 average. That's because they attract the kind of student who puts no effort into the eval and will just make a line of 4s down the page.
The GOOD instructors would start out at 2s and 3s until the students got used to the expectations and saw that they would be held to them. They would rise to 4s and 5s as those students became proud of not getting a free ride and saw the results of being driven harder.
Unfortunately, most students will not repeat the same instructors that were hard on them, given the choice, so that instructor is just going to keep getting a bunch of fresh 2s and 3s from the new crop.
The fact is that tutorials start on time EVERY day, so WTF am I supposed to do with a 3.76 average to that question?
"On a scale from 1 to 5, was the distribution of grades reasonable relative to other courses?"
Yo, Admin lackey! A few things to consider: (a) the students don't know what the distribution of marks was in this course; (b) they don't know what the distribution of marks is in other courses; and (c) it's not the TA's responsibility to determine the distribution of marks in any course. That last part's in our contract.
This is what passes for helpful professional feedback these days. (sigh)
It seems that knowledge acquisition is not part of course evaluations, so I ask those questions myself at the end of the class in a Blackboard discussion board.