Monday, January 31, 2011

Ask the Administrator: Self-Evaluations and Jargon

(Program note: A huge “Thank You!” to all the readers who volunteered info on how their colleges handle grade appeals. There’s quite a bit of variation out there! I appreciate the feedback.)

An occasional correspondent writes:

I filled out a faculty self-evaluation yesterday. Of course it was not called that. Rather it had some crazy acronym I don't really recall... maybe a
P-PEAR (Personnel Personal Evaluation Assessment Revue)?

The first question was "describe how you create a student-centered learning environment". This question confused me because it seems self-evident that a learning environment is centered around those learning, so I Googled it. There were interesting things about teacher-centered vs. student-centered styles, but the fact that I had to Google it to understand is concerning to me.

My question is this: Do people who write in doublespeak think everyone understands, or do they use doublespeak to obscure? Or perhaps to justify their own existence with magic incantations?

How about some plain English here? They could have gotten at the same thing with this question: "What happens in your classroom on an average day?" Or maybe part of the test is that I can respond with doublespeak to their doublespeak?

Concur in part, dissent in part.

“Describe a typical class” is far too vague, and it doesn’t give any clues as to what you’re actually trying to achieve (or, more darkly, what you’re being evaluated on). That said, something like “student-centered” assumes a level of familiarity with educational theory that may or may not be there.

My guess is that the idea behind the form is to push you in a given direction. Instead of asking how your class went, which could mean anything, it’s asking you what you did to get away from lecture and to have students participate in some meaningful way. That may or may not always be the best goal, but it’s both specific enough and broad enough to work across most disciplines. (I’ve long thought that, say, history should have a longer leash on the ‘no lecturing’ idea than many other fields, just because there’s so much raw material. But that’s another discussion.) If nothing else, it should show whether you’ve given any thought to how you structure your class.

I have to admit being increasingly skeptical of self-evaluations generally. Mediocre performers often rate themselves quite highly; whether that’s obliviousness or a reflection of another sense of how things should be done, I’m not sure. (“My job is to present the material. If they learn, great.”) To the extent that self-evaluations play into performance evaluations that actually matter, they discourage candid reflection and encourage happy talk. To the extent that they’re disregarded, I guess, you defeat the objection from puffery, but then raise the issue of why they exist at all.

In the academic world, we’re frequently torn between assessment or evaluation as formative, and assessment or evaluation as summative. Is the point of the evaluation to make you a better instructor, or to decide whether you’re good? (Grading often falls victim to a similar confusion.) If it’s the latter, then self-evaluation strikes me as obviously absurd. If it’s the former, then some clarity in the process seems necessary.

Were it up to me, the entire process would look very different. Rather than trying to prescribe methods or, worse, simply taking the professor’s word for it, we’d judge teaching performance by how well the students did in subsequent courses. Alternately, we could separate teaching from grading, and have faculty grade each other’s sections. In both cases, the idea is to introduce some sort of objectivity into the process. If the students you gave A’s couldn’t pass the follow-up course, then I have some questions about what you’re doing. If the students constantly bitch and moan about you, but they hit it out of the park the following semester, then I assume you’re doing something right. Let the results tell the story.

(This would also defeat the usual objection that faculty could game evaluation systems through grade inflation. If they didn’t control the grades, that would be impossible.)

At PU, I once saw a completely unintelligible survey go out to faculty from Home Office. They had taken a slew of categories from some study they had done, broken them into subunits, and then just thrown them out there raw. The levels of code were such that even the good sports had no idea how to respond. It happens.

If I had to read faculty responses to the question you were asked, I’d be looking for evidence that you’ve given serious thought to how you structure your class. (That is, instead of “I cover chapter 2, and then I cover chapter 3...,” I’d want some sign that you vary your methods as appropriate.) But yes, the question is a bit perplexing as asked.

Good luck.

Wise and worldly readers, have you seen a self-evaluation mechanism that actually made sense? Alternately, what do you think of a separation of teaching from grading?

Have a question? Ask the Administrator at deandad (at) gmail (dot) com.