Tuesday, December 11, 2018

Why Student Course Evaluations Survive


(Hat-tip to Kim Weeden for raising the question on Twitter.)

Why do colleges still have students do course evaluations?  Is it because administrators are knuckle-dragging mouth-breathers who don’t know that students’ evaluations of faculty are biased?

Probably sometimes.  But even those of us who walk upright and know about bias generally keep course evaluations around.  Why would we do such a thing?

Mostly for lack of alternatives.  

Are peer evaluations free of bias?  Of course not -- any one-on-one observation is vulnerable to the baggage the viewer brings.  

Are administrative evaluations free of bias?  The same principle applies.

What about pre-tests and post-tests?  First, good luck getting students to take both of those seriously.  Second, we all know the biases inherent in high-stakes testing.  Third, not every course lends itself to easily quantifiable outcomes.

Self-reports?  Puh-leeze.  Here’s a sentence I have literally never seen in a self-report: “I’m not very good at my job.”  (The closest thing to an exception was also the single best self-report I’ve ever read.  She structured it as a bildungsroman, complete with a metaphorical subplot about a colleague having a baby.  It was glorious.  But it was very much an exception.)

Student performance in subsequent courses?  There can be some merit to that, but not every course is part of a sequence.  In some settings, too, survivor bias can be a major issue. It also assumes that instructor effects in subsequent courses are roughly equal.  Why they’d be equal in subsequent courses, but not in initial ones, is not obvious. In small programs, the follow-on course may be taught by the same professor as the initial one, raising a potential conflict of interest.

Student evaluations can offer insights that other kinds of evaluations can’t.  If I sit in on a class for a day, I may get a pretty good sense of how the instructor interacts with the students, the classroom climate, clarity of presentation, and the like.  I probably won’t get a sense of how quickly or slowly the professor returns papers, how the course unfolds over time, or whether -- and I have actually dealt with this -- the professor simply skips class every few weeks.  Students are uniquely situated to see things like that.

For me, the key isn’t so much what’s on the evaluations as how they’re read.  If they’re taken in strict numerical rank order as the sole guide to quality, then yes, that’s malpractice.  But if they’re taken as one input among many, suitable for certain kinds of information, they have value.  Certain types of comments can be safely disregarded: “this isn’t the only class I take!”  “too much reading!”  “hard grader!”  Those are typically signs that the professor is doing her job.  Comments showing gender, racial, or other bias in the students are signs that those students’ evaluations should be ignored entirely.  But comments like “she’s great when she shows up,” if repeated by a significant number of students, raise a legitimate red flag.

At a previous college, I used to get a printout of the full-time faculty’s ratings in rank order.  I ignored the top 97% or so. When the same names kept appearing in the bottom 3%, I paid them extra attention.  In one memorable case, I asked the dean to do an extra observation of someone who brought up the rear over and over again.  She returned, shocked at what she had seen; apparently, the students were being kind. We developed an improvement plan that prompted a retirement.  His successor was a dramatic improvement, and the students responded accordingly.

I wouldn’t use student evaluations to distinguish the very good from the good; there’s far too much noise in there for that.  But when you consistently have the same couple of names scoring a standard deviation below the next-lowest, it’s reasonable to look at them a little more closely.  
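For the quantitatively inclined, here’s a rough sketch of that screening logic in Python.  It’s a minimal illustration, not anything any college actually runs: the instructor names, the scores, and the one-standard-deviation threshold are all invented for the example.  The idea is simply to flag instructors whose multi-term averages sit well below the next-lowest score, and to leave everyone else alone.

```python
# A hypothetical screening pass over course evaluation averages.
# Everything here (names, scores, thresholds) is invented for illustration.

from statistics import mean, stdev

# Hypothetical per-term average evaluation scores on a 1-5 scale.
scores_by_instructor = {
    "Instructor A": [4.6, 4.5, 4.7],
    "Instructor B": [4.1, 4.3, 4.0],
    "Instructor C": [2.1, 2.3, 1.9],   # consistently at the bottom
    "Instructor D": [3.8, 3.9, 4.0],
}

def flag_outliers(scores, min_terms=3, gap_in_sds=1.0):
    """Flag instructors whose multi-term average sits at least `gap_in_sds`
    standard deviations below the next-lowest average."""
    averages = {
        name: mean(vals)
        for name, vals in scores.items()
        if len(vals) >= min_terms   # require a consistent pattern, not one bad term
    }
    if len(averages) < 2:
        return []
    ranked = sorted(averages.items(), key=lambda item: item[1])
    spread = stdev(averages.values())
    flagged = []
    # Walk up from the bottom; stop as soon as the gap to the next score closes.
    for (name, avg), (_, next_avg) in zip(ranked, ranked[1:]):
        if next_avg - avg >= gap_in_sds * spread:
            flagged.append(name)
        else:
            break
    return flagged

print(flag_outliers(scores_by_instructor))   # -> ['Instructor C']
```

The particular threshold matters less than the posture: the numbers are used only to spot extreme, persistent outliers worth a closer look, never to rank everyone else.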

Yes, course evaluations are imperfect tools.  They need to be triangulated with a host of other information.  But if we threw them out entirely, we’d lose relevant information that we might not get any other way.  It comes down to readers.  If you have knuckle-dragging, mouth-breathing administrators, student course evaluations aren’t your real problem anyway.