Evaluating faculty is one of the most important parts of my job, yet some of the most basic information needed to do it right isn’t available.
We have student evaluations, of course, and formal observations by a peer, the chair, and the dean. The rest is mostly self-generated by the professor.
I’ve never seen a peer observation that was anything less than glowing. While I understand the impulse, the sheer abundance of superlatives renders them quite useless as evaluative tools. The whole exercise falls victim to the ‘you first’ problem: the first honest peer evaluation would expose both the observer and the observed to all manner of awkwardness. Any ideas out there on how to make peer observations meaningful? In principle, I like the concept, but the execution just hasn’t been helpful.
Among the information I don’t have here, though I did have it at my previous college, are student grades and course attrition rates.
From asking around, it sounds like student grades stopped being considered about ten years ago. I don’t know whether it was at the behest of the faculty or just a byproduct of some long-forgotten IT change, but it’s the way it is. When I’ve suggested gathering that information, I’ve received the “what planet are you from?” look. But it’s important, and not just in an evil way.
At my previous college, grades and drop rates were reported each term in (relatively) easily digested form. It wasn’t that hard to spot patterns, which gave a context for student evaluations. Some professors graded hard but got student respect; I knew they were the real deal. Some graded generously and got student respect, and some graded hard and generated student antipathy; in those cases, I relied more on observations. And, memorably, some graded generously but still generated student antipathy. They couldn’t buy love. They were, uniformly, train wrecks.
Then you have the cult favorites – hard grades, glowing evaluations, but only about half of the students make it to the end of the course. Again, without the numbers, it’s hard to tell whether the glowing evaluations speak for the whole class or just for the half that stuck it out.
It was useful to have that information, since all but the most egregiously incompetent could usually pull it together long enough for a decent observation, and, in the absence of context, low student evaluations could always be explained by (hypothetical) high standards.
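For what it’s worth, the pattern-spotting itself isn’t complicated once the data exists. Here’s a rough sketch, in Python, of how I’d sort sections into the buckets above if I had that termly report again. Everything in it is hypothetical: the field names, the grading and attrition cutoffs, and the labels are mine, not anything my college (or any college) actually runs, and a real version would want locally calibrated thresholds.

```python
from dataclasses import dataclass

@dataclass
class SectionRecord:
    instructor: str
    avg_grade: float        # section GPA on a 0.0-4.0 scale (hypothetical field)
    completion_rate: float  # fraction of enrolled students who finish
    eval_score: float       # mean student evaluation, 1-5 scale

def categorize(r: SectionRecord) -> str:
    """Sort a section into the rough patterns described above."""
    hard_grader = r.avg_grade < 2.5          # illustrative cutoff
    well_liked = r.eval_score >= 4.0
    high_attrition = r.completion_rate < 0.6

    if hard_grader and well_liked and high_attrition:
        return "cult favorite: check who was left to fill out the evals"
    if hard_grader and well_liked:
        return "hard grades, student respect: likely the real deal"
    if not hard_grader and not well_liked:
        return "generous grades, student antipathy: look closely"
    if hard_grader and not well_liked:
        return "high standards or poor teaching: lean on observations"
    return "generous grades, happy students: needs more context"

# One term's worth of (made-up) sections, flagged for follow-up.
sections = [
    SectionRecord("Prof. A", avg_grade=2.1, completion_rate=0.55, eval_score=4.6),
    SectionRecord("Prof. B", avg_grade=3.6, completion_rate=0.95, eval_score=2.4),
]
for s in sections:
    print(f"{s.instructor}: {categorize(s)}")
```

Nothing fancy, in other words; the hard part isn’t the sorting, it’s getting the grades and drop rates reported in the first place.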
I’d love to develop a way to do a speedy-but-thorough content analysis on the written comments on the backs of student evaluations. Some of them are self-refuting (“The prof is a mean ass dude how come he dint give me an a what an asshole this school suxxx”). Some are revealing of serious issues (chronic instructor lateness particularly brings out student snarkiness, and I have to say, I don’t blame them). Some are unintentionally funny (one of mine from several years ago: “Now I write more clearer.” Uh, thanks.) But most are fairly vague and positive, and therefore not terribly useful. When you’re plowing through thousands of them, it would be helpful to have some way to separate the banal from the revealing.
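To make that concrete, here’s a minimal sketch of the kind of triage I have in mind, assuming the handwritten comments have been typed up somewhere. It’s just keyword matching in Python; the keyword lists and labels are my own invention and would need tuning against real comments, but even something this crude would pull the “shows up twenty minutes late” comments out of the pile of “great class!”

```python
import re

# Hypothetical patterns for issues worth a dean's attention.
ISSUE_PATTERNS = {
    "lateness": r"\blate\b|\btardy\b|cancell?ed class",
    "grading complaints": r"\bunfair\b|\bharsh\b|did( not|n'?t) give|dint give",
    "disorganization": r"disorganiz|confus(ing|ed)|all over the place",
}

# Generic praise that is pleasant to read but not useful for evaluation.
BANAL_PATTERNS = [r"great (class|prof)", r"\bnice\b", r"learned a lot"]

def triage(comment: str) -> str:
    """Label a comment as revealing a specific issue, banal praise, or neither."""
    text = comment.lower()
    for issue, pattern in ISSUE_PATTERNS.items():
        if re.search(pattern, text):
            return f"revealing: {issue}"
    if any(re.search(p, text) for p in BANAL_PATTERNS):
        return "banal: generic praise"
    return "other: read by hand"

# A few made-up comments run through the triage.
comments = [
    "Great class, learned a lot!",
    "He showed up 20 minutes late almost every week.",
    "Now I write more clearer.",
]
for c in comments:
    print(f"{triage(c):30} | {c}")
```

The point isn’t the code; it’s that “separate the banal from the revealing” is mostly a sorting problem, and the revealing comments cluster around a fairly short list of recurring complaints.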
Measurement issues again. This is becoming a theme. Or is it a motif? Sigh.