Monday, January 31, 2011
Ask the Administrator: Self-Evaluations and Jargon
An occasional correspondent writes:
I filled out a faculty self-evaluation yesterday. Of course it was not called that. Rather, it had some crazy acronym I don't really recall... maybe a
P-PEAR (Personnel Personal Evaluation Assessment Revue)?
The first question was "describe how you create a student-centered learning environment." This question confused me because it seems self-evident that a learning environment is centered around those learning, so I Googled it. There were interesting things about teacher- vs. student-centered styles, but the fact that I had to Google it to understand is concerning to me.
My question is this: Do people who write in doublespeak think everyone understands, or do they speak in doublespeak to obscure? Or perhaps to justify their own existence with magic incantations?
How about some plain English here? They could have gotten at the same thing with this question: "What happens in your classroom on an average day?" Or maybe part of the test is whether I can respond with doublespeak to their doublespeak?
Concur in part, dissent in part.
“Describe a typical class” is far too vague, and it doesn’t give any clues as to what you’re actually trying to achieve (or, more darkly, what you’re being evaluated on). That said, something like “student-centered” assumes a level of familiarity with educational theory that may or may not be there.
My guess is that the idea behind the form is to push you in a given direction. Instead of asking how your class went, which could mean anything, it’s asking you what you did to get away from lecture and to have students participate in some meaningful way. That may or may not always be the best goal, but it’s both specific enough and broad enough to work across most disciplines. (I’ve long thought that, say, history should have a longer leash on the ‘no lecturing’ idea than many other fields, just because there’s so much raw material. But that’s another discussion.) If nothing else, it should show whether you’ve given any thought to how you structure your class.
I have to admit being increasingly skeptical of self-evaluations generally. Mediocre performers often rate themselves quite highly; whether that’s obliviousness or a reflection of another sense of how things should be done, I’m not sure. (“My job is to present the material. If they learn, great.”) To the extent that self-evaluations play into performance evaluations that actually matter, they discourage candid reflection and encourage happy talk. To the extent that they’re disregarded, I guess, you defeat the objection from puffery, but then raise the issue of why they exist at all.
In the academic world, we’re frequently torn between assessment or evaluation as formative, and assessment or evaluation as summative. Is the point of the evaluation to make you a better instructor, or to decide whether you’re good? (Grading often falls victim to a similar confusion.) If it’s the latter, then self-evaluation strikes me as obviously absurd. If it’s the former, then some clarity in the process seems necessary.
Were it up to me, the entire process would look very different. Rather than trying to prescribe methods or, worse, simply taking the professor’s word for it, we’d judge teaching performance by how well the students did in subsequent courses. Alternately, we could separate teaching from grading, and have faculty grade each other’s sections. In both cases, the idea is to introduce some sort of objectivity into the process. If the students you gave A’s couldn’t pass the follow-up course, then I have some questions about what you’re doing. If the students constantly bitch and moan about you, but they hit it out of the park the following semester, then I assume you’re doing something right. Let the results tell the story.
(This would also defeat the usual objection that faculty could game evaluation systems through grade inflation. If they didn’t control the grades, that would be impossible.)
At PU, I once saw a completely unintelligible survey go out to faculty from Home Office. They had taken a slew of categories from some study they had done, broken them into subunits, and then just thrown them out there raw. The levels of code were such that even the good sports had no idea how to respond. It happens.
If I had to read faculty responses to the question you were asked, I’d be looking for evidence that you’ve given serious thought to how you structure your class. (That is, instead of “I cover chapter 2, and then I cover chapter 3...,” I’d want some sign that you vary your methods as appropriate.) But yes, the question is a bit perplexing as asked.
Wise and worldly readers, have you seen a self-evaluation mechanism that actually made sense? Alternately, what do you think of a separation of teaching from grading?
Have a question? Ask the Administrator at deandad (at) gmail (dot) com.
We only have tenure-track faculty write self-evaluations. One of the things I look for is whether or not the faculty member is sufficiently reflective on how much the students have learned, relative to how much they have been taught. I would urge your correspondent to make sure that he is able to do this. If he is not up on the jargon, have him look up the more applicable term "learning-centered," as opposed to "student-centered."
A couple of very good things happen with this kind of assessment: 1. Course content becomes more standardized. It's immediately apparent if someone is teaching poetry in a comp course, but the standardization isn't absolutely rigid. 2. Classroom chemistry improves. Instructors can say to students, "Your final papers are going to be judged by a group of very picky English teachers. They--not me--will decide whether you pass or fail." The role of classroom instructor changes from judge to editor or (don't snicker) coach.
The downside to portfolio assessment is workload. It's the end of the semester, and cc comp teachers, many with a 5/5 workload, are already neck-deep in student papers. Now here's still another huge stack of student writing.
Obviously, portfolio assessments can be used in other disciplines, but these workload issues usually outweigh the benefits of portfolio assessment.
And of course someone might have a very "student centered" class environment without ever having set foot in an education theory class.
This is setting aside the question of whether the current "student-centered" hype makes sense across the board. As DD notes, it probably lends itself better to some disciplines and some courses than to others. And if "student-centered" is such a desirable goal that it merits ranking as the very first question on the self-evaluation, then the school in question needs to do a better job of explaining what it means to its faculty.
I am hoping to learn from others how to make these self-evaluations meaningful. I truly want them to be. But suspecting that they are constantly over-inflated, I have taken to ripping myself to shreds in mine. I have been truly brutal to myself. Two years ago I called myself a failure. This past year, I declared myself incompetent. Really. I am trying to see whether these tools are actually used in decision-making. I'm betting they won't be.
I'm at a small liberal arts college, am well-published, have excellent student evaluations, have sent several grads to hoity-toity schools that denied me admission once upon a time, and hold an average GPA in my classes of a C. Yet I can't bring myself to think that any of these measures build a plausible case that I am any good at what I do.
Is this what it is like to have a mid-life crisis?
I don't understand people with job security (no tenure here at WayUpNorthCC, you'll be glad to know, dd--but we do have union seniority rules) who hesitate to examine their professional practice with scrupulosity, courage, and the resolve to improve.
Al, don't be so easily alarmed. There is this thing called 'irony' which dd's correspondent might have been deploying.
When I wanted a real self-evaluation, I found someone in a related discipline outside my department and had her sit in on my class. I explained to her what my objectives were, and she reported back to me on whether or not I had met them. I kept a journal for the course and noted what worked and what didn't and why. But I never submitted any of those things in an official dossier or anything else that could have been used against me in our highly contentious tenure process.
Coming from the sciences, I always found questions like this irritating. Lecture may be evil and wrong, but the reality is that some disciplines lend themselves to practice and self-study (music, math, science). Why we ignore this fact is mysterious to me (but perhaps it's rooted in the fact that social scientists are the ones who write articles about teaching?). We base a lot of our educational theory on K-12 research, and I wonder if we wouldn't do better to expect more from our adult students than we do from children.
The problem we in the sciences have with either bit of lingo is that the actual goal appears to be Passing Centered if you ONLY look at one particular desired outcome: increased retention. All faculty know how to meet that one if no one is measuring anything else.
As with your allusion to history, scientists tend to look at Learning Centered as knowledge-centered -- some specific set of skills and facts that students will be able to apply in the NEXT semester's class. It does little good in the long run to get lots of kids through calculus 1 and physics 1 if they fail calculus 2 and/or their engineering classes.
On the issue of self evaluation, the smart thing to do is save the criticism for yourself, in private. I've also done an informal peer evaluation with a colleague (sitting in on each other's classes) and found that to be the most productive of all.