This one has nothing to do with the election. Promise.
Last year we started a program in which several full-time faculty, mostly tenured, were compensated for being on call for confidential peer-to-peer class observations. The idea was that the formal observations done by deans go in personnel files and are used for evaluation. Anybody who’s nervous about their standing will probably go with some version of ‘greatest hits’ when the evaluative observation is being done. Almost by definition, that limits the value of the evaluative observation.
But sometimes it’s useful to get feedback from a colleague that won’t be used against you later. That can help if you’re struggling and want to improve, if you’re trying something new and risky, or if you suspect you’ve slipped into a rut.
I like the concept a lot, but someone asked me a question that I wasn’t sure how to answer. If the confidentiality of the observations is maintained, how do you measure success? In other words, after a year or two, how do you know if the resources spent on the observations are worth it?
I admit struggling with this one.
The basic idea behind the question makes sense. Given limited resources, it’s fair to ask that we do some due diligence to ensure they’re being used wisely. No argument there. But the nature of the project -- don’t tell me who got observed or how it went -- makes deeper assessment tricky.
Obviously, there’s the first-level quantitative indicator: if, after a publicity blitz, nobody takes the group up on the offer, then I can assume there’s little appetite for it. But assuming a reasonable rate of uptake, how to tell if the program is working?
“Working” implies a purpose. I see the primary purpose as helping faculty be effective in the classroom, whether by getting back on track or by getting confirmation that something is working. (In practice, I’d imagine there’s plenty of splitting-the-difference: “It was mostly great, but when you did x, it didn’t quite work; maybe try it this way…”) The secondary purpose is validating faculty as professionals by conspicuously respecting their judgment. That one’s harder to measure, but it’s really a by-product of the first. If I could measure the first well, I’d call it good.
We could ask the people observed to fill out evaluations of the feedback, maybe at the end of the semester. Waiting at least a few weeks might help people get past the knee-jerk response to criticism, if that’s what they got, and focus instead on whether it was actually helpful. There’s some value in that, though it relies a bit too much on self-awareness. Sometimes the most useful feedback isn’t pleasant, at least in the moment.
I’m guessing I can’t be the first person to face a question like this, so I’ll ask my wise and worldly readers for help. Have you seen useful ways to get feedback from smallish numbers of incumbent employees while respecting confidentiality?