Sunday, November 13, 2016


Measuring Success Confidentially


This one has nothing to do with the election.  Promise.

Last year we started a program in which several full-time faculty, mostly tenured, were compensated for being on call for confidential peer-to-peer class observations.  The idea was that the formal observations done by deans go into personnel files and are used for evaluation.  Anybody who’s nervous about their standing will probably go with some version of ‘greatest hits’ when the evaluative observation is being done.  Almost by definition, that limits the value of the evaluative observation.

But sometimes it helps to get feedback from a colleague that won’t be used against you later.  That kind of feedback is valuable if you’re struggling and want to improve, if you’re trying something new and risky, or if you suspect you’ve slipped into a rut.  

I like the concept a lot, but someone asked me a question that I wasn’t sure how to answer.  If the confidentiality of the observations is maintained, how do you measure success?  In other words, after a year or two, how do you know if the resources spent on the observations are worth it?

I admit to struggling with this one.

The basic idea behind the question makes sense.  Given limited resources, it’s fair to ask that we do some due diligence to ensure they’re being used wisely.  No argument there.  But the nature of the project -- don’t tell me who got observed or how it went -- makes deeper assessment tricky.

Obviously, there’s the first-level quantitative indicator: if, after a publicity blitz, nobody takes the group up on the offer, then I can assume there’s little appetite for it.  But assuming a reasonable rate of uptake, how do you tell whether the program is working?

“Working” implies a purpose.  I see the primary purpose as helping faculty be effective in the classroom, whether by getting back on track or by getting confirmation that something is working.  (In practice, I’d imagine there’s plenty of splitting-the-difference: “It was mostly great, but when you did x, it didn’t quite work; maybe try it this way…”)  The secondary purpose is validating faculty as professionals by conspicuously respecting their judgment.  That one’s harder to measure, but it’s really a by-product of the first.  If I could measure the first well, I’d call it good.

We could ask the people observed to fill out evaluations of the feedback, maybe at the end of the semester.  Waiting at least a few weeks might help people get past the knee-jerk response to criticism, if that’s what they got, and to focus instead on whether it was actually helpful.  There’s some value in that, though it relies a bit too much on self-awareness.  Sometimes the most useful feedback isn’t pleasant, at least in the moment.


I’m guessing I can’t be the first person to face a question like this, so I’ll ask my wise and worldly readers for help.  Have you seen useful ways to get feedback from smallish numbers of incumbent employees while respecting confidentiality?

Comments:
One way to get some feedback is for the people doing the initial classroom visits to re-visit in (say) a year, and have them do a follow-up assessment of whether the faculty being observed have (a) made changes that (b) incorporate some part of the feedback they got. In this way, the faculty receiving the assistance don't have to be identified at all, but you still get some feedback on whether the observers think things have improved. (Of course, you then have to be able to trust the observers, which gets sort of meta pretty quickly.)

An alternative is an annual, anonymous survey of faculty about teaching development activities in general, including the peer reviews. That survey should probably be done in any event, so the peer-review piece could simply be added to it.
 
My first reaction was what Don said in his second paragraph. Our faculty are expected to do some teaching development activities, which we report each year, and we are also tasked with continuous quality improvement on the learning outcomes we assess. Just saying "I made use of the peer-to-peer review process to get some insight into how those new methods were working" would be enough in that context. Faculty who are actually comfortable with the working environment at your college would probably say more, perhaps even praising the peer reviewer by name.

On your end, Don's first paragraph touches on what I think would be the minimum amount of info they should report to you each year: the number of visits and/or consultations, for example, and the number of revisits or follow-ups.
 
SamChevre says:

One idea that I have seen in a corporate context (where I work) is something like the following. (We use a similar approach for counseling on personal issues, which is free to the employees.)

Ask the evaluators to report, every semester, a couple of very simple metrics:
1) How many evaluations did you do?
2) Very roughly, how many were about what? (Your categories above are general enough to work).

With that, you can at least be sure that work is being done--it isn't tied to individual professors or specific changes, but it may help. It also helps if you want to add an evaluator--if half your evaluations are about "am I using {technological resource X} effectively," that may influence who you choose as evaluators.
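(A minimal sketch of how those per-semester tallies might be rolled up, assuming each evaluator submits a simple category-to-count report; the category names below are made up for illustration and nothing identifies the observed faculty.)

```python
from collections import Counter

def aggregate_reports(reports):
    """Combine per-evaluator tallies into one anonymous summary.

    `reports` is a list of dicts mapping a feedback category to the
    number of observations that touched on it. No names, no sections,
    just counts by theme.
    """
    totals = Counter()
    for report in reports:
        totals.update(report)  # adds counts per category
    return totals

if __name__ == "__main__":
    semester_reports = [
        {"new technique": 3, "back on track": 1},
        {"new technique": 2, "classroom tech": 4},
    ]
    print(aggregate_reports(semester_reports))
    # Counter({'new technique': 5, 'classroom tech': 4, 'back on track': 1})
```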
 