Thursday, December 16, 2010

 

How to Read Student Evaluations

‘Tis the season for student evaluations of instructors, so I thought I’d share some thoughts on how administrators can best read them.

In a phrase: look for outliers. It’s really about spotting the folks who are badly trailing the rest of the pack. Putting much weight on the difference between the lower middle and the upper middle is missing the point. There’s considerable normal variation, and all kinds of irrelevancies can push one instructor slightly above or below another. But when the same few names show up at the bottom of the list semester after semester, it’s difficult to write that off to random variations.

That’s where comments are useful. Some comments suggest ideological or cultural antipathy at work; those discredit themselves. (About once a year I get a student complaining that his professor is gay, and wanting to know what I’m going to do about it. “What would you suggest?” usually ends the discussion.) But some comments are actually revealing. I tend to discount references to “arrogance” or “full of himself,” but I take seriously comments like “he takes two months to grade papers” or “he’s incredibly disorganized.” When clusters of students make the same basic comment, there’s usually at least a conversation to be had.

Some professors like to say that student evaluations shouldn’t exist, or at least shouldn’t count for anything. I have to disagree. When a dean does a class observation, she observes one class meeting. Things like “speed of grading” simply won’t show up on the radar, and of course, anyone can have an uncharacteristically good or bad day. But students see the class every day, so things that might seem inconsequential (or be entirely invisible) in a single moment take on their full significance.

Students also have different ‘eyes’ than faculty peers or deans, and reaching them is really the point. Inferential leaps that may seem obvious to someone with a doctorate in a related field may be entirely opaque to a freshman encountering the subject for the first time. It’s hard to fake ignorance, so we need to ask those who don’t have to fake it.

That said, I’m often struck by faculty paranoia around student course evaluations. They’re part of the picture, but they’re far from dispositive. In my student days at Snooty Liberal Arts College I remember a young professor -- maybe second year -- handing out the evaluation forms and then just sitting there and staring at us as we filled them out. He seemed paralyzed with fear that we’d be lukewarm and get him fired. After an uncomfortable silence, we started filling out the forms, wondering when/if he would leave. Another student -- I can’t take credit, though I wish I could -- raised his hand and asked, “How many m’s in ‘incompetent’?” It broke the tension, even if it seemed just this side of cruel.

Of course, most student comments aren’t quite so clever. I don’t know why students feel compelled to comment on professorial hotness, and I wince whenever I read something like “he helped me write more better.” (I actually got that one once.) My brother reports that he once had a professor in the later stages of his career, perfectly fine in class but long past caring about evaluations. The students decided collectively to write their comments as baseball metaphors. “Although he’s lost a little on his fastball, he makes up for it by painting the corners.” Okaayyyy....

Wise and worldly readers, have you ever read anything on a student course evaluation that stuck with you? Is there a right way to read these things?

Comments:
I'm not a faculty member, so I can't speak to the pressure faculty must feel with regard to student evaluations. As an Academic Advisor, however, and one at a satellite campus of my university where faculty are not observed by deans or department chairs, I see student course evaluations as a critical tool for the university to assess how well things are going in the classroom. I regularly encourage my students to take the time to fill them out, whether their feedback is positive or negative, as it is the only significant way the university can get feedback on faculty and student experiences.

My student population is a non-traditional one, so we don't seem to contend as much with some of the silliness in student evaluations that you described. However, I can attest that students are often afraid to complete their evaluations or to be honest in them. They fear that faculty will find out what they said and use it as an excuse to grade them poorly. While I regularly reassure them that the evaluation process is specifically designed so that the information is aggregated and kept anonymous, the fear still exists.

Student evaluations are a valuable feedback mechanism, though as you caution, the feedback garnered from them has to be interpreted carefully. Both faculty and students need to be encouraged to make the most of this useful tool and to not fear it.
 
A question out of curiosity--do administrators also read the online evaluations/ratings systems (e.g. "ratemyprofessor.com") for professors? Or do only those collected in a strictly controlled environment matter?
 
I "teach" and I use that term loosely, a 1 credit internship course. I supervise 4 or 5 seniors in a practicum setting.

Normally, my evals are great. I did feel pretty nervous that first time though.

The strangest was one student who said I wasn't fair in my mid-term eval and graded her down. I can only give pass or fail at the end of the year (she passed), and the mid-term eval is just my conversation with her site supervisor, with none of my own comments. It really irritated me.
 
i always struggled with student evaluations, because most of my profs were good people, and i didn't want to get them fired, no matter how bad they were (i wouldn't want someone trying to fire me!).

evaluations need questions geared in such a way that the comment sections don't need to be so brutal.

working in the private sector, when i get a call from another company about an ex-worker, i can only state 2 things (per my HR department): whether person X worked here, and whether i would hire them again.

that last part is key. would students take that class again?

my wife is a public school teacher. her principle sits in on a full class at least twice a year to evaluate her. i never once saw a dean sit in on a class to evaluate a professor. why not get a first hand look for yourself?

my biggest complaint (as an engineer) was always along the lines of "Does not possess the ability to speak English clearly." i was/am amazed that fluent English is not a requirement. it was even harder for my friends who had english as a second language.
 
*principal*
 
I think faculty feel "paranoia" about evaluations when they see no evidence that anyone is looking at the whole picture of their teaching beyond those evaluations. If you never get comments on any portion of your teaching but student evaluations - which is really common across a range of institution types, even those that call themselves teaching-oriented - then you ascribe more weight to them than is probably good.

I think another problem with evaluations - one that I feel is specific to the questions on them at my institution - is that they ask students to evaluate things about the professor that have nothing to do with the content of the course or what they may or may not actually have learned. (Questions that ask students to evaluate how smart they thought the professor was, how likable the professor is, etc. Yes, women and faculty of color get much lower evaluation scores and much more offensive comments, semester in and semester out, on these questions, and yes, most of us stop looking at our evaluations at a certain point because they make you feel completely disgusting about yourself and about your students. When I and others have brought up the problem, we've been told by well-meaning administrators and d00ds on the faculty who've been around for 20+ years that they "take that into account" and "not to worry." Ugh.) While what I'm describing is specific to my institution, I know there are other institutions out there where this is a problem, too.

At any rate, the result for me is that I have stopped, since tenure, reading my student evaluations. (I always get good evals by the numbers - but the comments which curse at me, comment on my physical appearance, express anger at my intellect, etc., make me want to vomit and make me hate my students, which is not good for my teaching.) And when I want student feedback that will actually help with my teaching, I come up with my own questions for students to answer or I - god forbid - talk to my students.
 
Students should be offered the opportunity to evaluate faculty, and they should know that their evaluations will be taken seriously. That said, everyone charged with actually evaluating faculty teaching performance - peers, chairs, deans - should understand that, beyond helping them catch real problems, student evaluations in aggregate are just about useless.
 
Should the comments be used to evaluate? As Dean Dad notes, how you read them is subjective (dismissing some and not others). At my campus, evaluations are being questioned. Evaluation experts are tearing apart our evaluation forms: too many open-ended comment sections, not validated, using comments to evaluate faculty, etc.

The research behind what they are saying is pretty impressive. Sadly, this scholarship rarely makes it into the actual evaluation forms (and how to read them). It is a science, but we let amateurs (i.e. faculty & administrators together) carry it out.
 
That said, student evaluations of faculty do, indeed, provide information that nothing else can, and if used wisely can help faculty improve and help administration decide, as DD says, who needs a good talking-to. But only if used in the aggregate over many samples. Individual responses, even individual sections or whole academic years' worth of results mean little in isolation. These are statistical measures (even the written comments) and must be used statistically. This is why online comments like those on Ratemyprofessor are meaningless and dangerous. Not because they are anonymous -- in-class evals are too, as is, by the way, every comment on this blog -- but because there are too few of them to provide a meaningful sample.

No faculty member should have to fear that their job hinges on student evaluations, and no students should think they have that kind of power. But many administrators need a good talking-to about how to use them properly.
 
The challenge, in my mind, is understanding that evaluations are easily influenced by external factors...

We know that there are lots of ways profs can 'juke' the stats: dress nicely, show videos, always post notes ahead of time, tell jokes, belittle themselves, give easy grades. Grades, in particular, are so easy to manipulate (and probably the best predictor of your evaluation). Even when the course requires a certain grade range, you can game it simply by making the midterm and assignments (before the evaluation) easy and the final paper and exams (after the evaluation) hard.
 
@Chosen Folks--I agree that asking students to evaluate elements that faculty can't control is worse than useless. We got around that by asking faculty to write the student evaluation form and to focus on things students can evaluate (shows up on time, teaches to the end of the period, as basic examples). The other check the faculty put in place was to ask students to evaluate themselves (attendance percentage, doing homework, keeping up with the readings, and the grade they feel they've earned). When all the students evaluate themselves, like Mary Poppins, as practically perfect in every way, you take the rest of the evaluation with a grain of salt.
Dean Dad's point about outliers is true, though: in 10 years of looking at these things, it's always the same people at the very top and the same people at the very bottom. We ask the faculty at the top to run workshops and push the faculty at the bottom to go to them.
 
At my "teaching-oriented" SLAC, the only comments on my teaching that have played a role in my yearly evaluation have come from student evaluations. I.e., in my annual review letter, my dean has commented in the teaching section only on the results of these evaluations. Nothing on peer reviews, nothing on load, nothing on number of preps, nothing on new preps, nothing on overloads, nothing on personal reflections on teaching. Such letters determine merit raises (if the money's there for them, which it hasn't been for a while) and awards ($$$) for teaching, and they play a large role in mid-tenure and tenure/promotion reviews.

In such a case, anxiety (not "paranoia"--I wish such terms would not be used so casually) can perhaps be understood.
 
I love student evals, but my frustration stems from the fact that my freshman writers will not learn and recognize the value of thinking, reading, and writing critically until the next term or year or even later. When they are taking my class, they only think about how "I was always a good writer in high school" and how "I can't be a C writer." They find 10-20 pages a class "more like a grad class than a freshman comp class." And most audacious: "the prof should tell me exactly how to write, not expect me to come up with it on my own."

There is a huge jump in expectations between high school and college writing and reading in our community, and they take that shift out on the instructor when they should be complaining to their school district for only expecting the minimum from them.
 
"Students should be offered the opportunity to evaluate faculty, and they should know that their evaluations will be taken seriously."

Honestly, I don't see how you can expect students to fill out evaluations in a meaningful way if you don't explain to them throughout the semester what kinds of feedback are useful. I would do a short seminar on constructive criticism and have the students fill out evaluations for me and for each other throughout the term so that by the end of the semester, they had a lot of practice giving feedback.

I also did things that had nothing to do with my teaching that raised my scores. I started making a practice of sharing the average scores for the department and the college with my students because I didn't think they understood how their "average" rating was hurting me and the other two instructors in our course (the mean rating for the college was "excellent" – two points above "average"). I discussed very bluntly with the students that our evaluations from our chair were influenced by student evaluations and that our bonus and pay were determined in part by student evaluations. I started doing midterm teaching evaluations that were for my own use. Even if I did not act on any of the suggestions in those evals, my scores went up on the final evals.

All of this made me very cynical about student evals. I wanted to know what was helping my students learn in the course. But my observation was that my evaluations seemed to have more to do with my students' perception that I "cared" about them than with whether or not they actually learned the material. I think this is the key flaw in evaluations – students judge faculty not just on how much they, the students, learn but on how "hard" the course was or on how they feel about the professor. The students' goals (to get through the course and get their degree, to work – but not too hard, to get a good grade) and the goals of the faculty and college (meet curricular objectives, meet assessment goals) are in conflict.
 
There's a relevant article in a recent Chronicle of Higher Education: "Students Lie in Course Evaluations."

About 1/3 of students fudge their faculty evaluations. Some give higher ratings to teachers they like; even more give lower ratings to teachers they don't like.

--Philip
 
"do administrators also read the online evaluations/ratings systems (e.g. 'ratemyprofessor.com') for professors?"

I would hope they don't. They are too easy to spoof — there's no check that the person making the rating actually took the course. I've seen several online campaigns to trash someone's rating, and I know of at least two cases where the number of ratings was at least an order of magnitude higher than the number of students enrolled in the professor's course.
 
At my previous institution, a member of our division got a "complaint" along the lines of "She actually expects us to read the book and to _think_!" The division chair called us together and asked why we weren't all getting the same negatives.
 
"That said, I’m often struck at faculty paranoia around student course evaluations. They’re part of the picture, but they’re far from dispositive."

Yeah, right. Tell that to the dean and chair who dragged me in for a 75-minute meeting on the basis of little more than the numeric average of my evaluations being on the wrong side of "very good" by a tenth of a point, neither of them having set foot in my classroom in several years of employment. And neither has set foot in there since, despite an "improve or else" directive.
 
I got denied tenure based on a summary of entirely narrative evaluations -- a summary compiled by students who were given no guidance -- so it's not surprising that I am still anxious about reading evaluations. Even though they are now extremely positive, it's hard to forget the one that says "She sucks."
 
Funny you mention slow grading.

I have a lot of students and they are writing for me every single class day of the semester: so, three days of class per week and three days of reading and grading. Everything has to be read by the following class period because if I dogged off, I quite literally could not squeeze two days worth of reading into a single day. No one has ever waited more than 48 hours (72 on weekends) for assignment turnaround.

It's a lot of work and a pain in the ass, but I don't think writing students can afford to spend a whole semester polishing a few pieces to perfection. They have to write a lot, and I have to read a lot.

So, the only student evaluation that ever pissed me off was a student checking off the box next to: 'Fails to return papers quickly.'

The student signed his evaluation--and it's true that in his case I failed to return his papers quickly. But that's because he would regularly skip several classes in a row! I could hardly return his frickin papers if he wasn't there!
 
One thing that I think shouldn't be the case: tenured faculty should not be exempt from doing evals, nor should they get to select a subset of classes. Otherwise either (a) untenured faculty can only be measured against other untenured faculty, or (b) worse, they're measured against an unrealistic picture of the faculty at large.

Another thing: students need to be told what their evals will be used for. They've spent much of their life filling out responses in all sorts of contexts (not necessarily educational ones), most of which disappeared into a black hole as far as they were concerned.

It's not unreasonable for them to have no clue that this isn't just a piece of pointless bureaucracy that will go into a filing cabinet somewhere and be forgotten. Faculty are paranoid (well, untenured faculty) because they are aware that their career may turn on something to which the students may quite reasonably give very little thought - especially when it's the fifth evaluation that the student has had to fill out that week, and s/he is desperate for the semester just to be over.
 
For the ten thousandth time, if your institution is doing things based on student evals, juke the damn evals!

Why would anyone sacrifice their career in order to keep their grading average .3 lower? Or not to bring in cookies once a week for the three weeks leading up to evals?

You don't have to sacrifice an iota of teaching quality. Juke the evals and if you get a chance to change the culture, do so.
 
If you were attemping to design the worst possible method of assessing teaching effectiveness, it’d be pretty fucken hard to outdo student evaluations. Students haven’t the faintest fucken clue what good teaching is about, they don’t know what they need to learn, and they don’t know how to judge whether they are learning it.

Do armies poll their basic trainees to see if their drill sergeants are doing a good job? FUCKE NO!
 
I'm a grad student who puts a lot of thought into what I write on evals, so it's a little dispiriting to see commenters discounting them. I do try to make specific suggestions about readings, assignments, etc., rather than just complaining. (And, usually, my response is that the instructor's great and I learned a lot, but a few tweaks might help.) My school and department keep tinkering with the degree requirements (turning 3-credit courses into 2-credit courses to make room for a new "fundamentals" course, pulling the quantitative analysis from a policy course and turning it into its own 2-credit required course, etc.), so I especially hope the professors and administrators are using evals to gauge whether workloads have been appropriately reduced for the now-2-credit courses, to what extent the new courses cover the same material as the previously existing ones, etc.

To what extent can questions be written to distinguish between students who might give low scores unfairly (e.g., because s/he didn't want to have to work hard for an A) vs. those who have useful feedback to offer? I think it's good that my school asks separate questions about a professor's command of subject matter, preparedness for class, and ability to engage with students; unless a professor is a total wreck, you can expect a student who's being honest and thoughtful to give low marks for one or two of those areas but not all three.

I also appreciate that the form specifically asks whether the professor "starts and dismisses class according to scheduled meeting time"; I had one class in which the professor regularly ran over, and I appreciated the chance to point out this one problem while praising his expertise and engaging classroom manner in other questions.
 
This is what I've seen in a similar but different world (professional training):

Student evals only have one important part - the free-form comment box. The rating category questions are only there to jog someone's memory when they're put on the spot on eval day.

Have a 1-5 rating system, but require any 1s or 5s to be substantiated by a comment. Now you can scan the comments for any legitimate gripes.

The numbers, well, they're useful in one way. The teachers who are too easy, everyone's buddy, afraid to grade strictly, will get a perfect 4.0 average. That's because they attract the kind of student who puts no effort into the eval and will just make a line of 4s down the page.

The GOOD instructors would start out at 2s and 3s until the students got used to the expectations and saw that they would be held to them. They would rise to 4s and 5s as those students became proud of not getting a free ride and saw the results of being driven harder.

Unfortunately, given the choice, most students will not sign up again with an instructor who was hard on them, so that instructor is just going to keep getting a bunch of fresh 2s and 3s from each new crop.
 
"On a scale from 1 to 5, how often did the TA start tutorials on time?"

The fact is that tutorials start on time EVERY day, so WTF am I supposed to do with a 3.76 average on that question?

"On a scale from 1 to 5, was the distribution of grades reasonable relative to other courses?"

Yo, Admin lackey! A few things to consider: (a) the students don't know what the distribution of marks were in this course; (b) they don't know what the distribution of marks are in other courses; and (c) it's not the TA's responsibility to determine the distribution of marks in any course. That last part's in our contract.

This is what passes for helpful professional feedback these days. (sigh)
 
These are not the correct questions to ask.
 
There should be one field that specifically asks students: What did you take out of this course? What inspired you to look further? What knowledge are you taking with you? What would you have liked to learn additionally in this class? What would you change if you could?

It seems that knowledge acquisition is not part of course evaluations. I ask these questions myself at the end of the class in a Blackboard discussion board.
 