Tuesday, January 19, 2016


Teaching Possibilities

I'm hoping to steal shamelessly from some wise and worldly readers at other places.  

Too many colleges treat faculty as a cost, rather than an asset.  Professional development is often reduced to travel or webinars, and then cut when things get tight, which they nearly always do.

Conferences can be great; I've certainly learned a lot at them.  But they're expensive, and we have nowhere near the money to send everyone to them.  Multiply $1,500 by several hundred faculty and staff, and it adds up.  And the "send everyone to conferences" model suggests that every good idea comes from outside, which simply isn't true.

We have people with some terrific ideas right here.  

That's especially true when it comes to teaching, as opposed to current developments within particular disciplines.  

So we're trying something new, and I'm hoping that some folks who have done something similar will have useful tips.

We've assembled a group of respected faculty across disciplines -- they've chosen the name "Teaching Possibilities" -- to be on call to provide confidential, non-evaluative peer observations and feedback to any faculty, full-time or adjunct, who request them.  

The observations have a few ground rules: they have to be done by people outside the home discipline of the observed, to ensure fresh eyes.  They will not be reported back to the administration in any form other than a raw count (i.e. "this semester we did fourteen").  They will not discuss the observations with anyone outside the group.  And the goal of the observations is improvement.

My theory is that standard evaluations -- of the sort that go into personnel files -- are useful as a sort of quality control, but not generally ideal for improvement.  These are an attempt to fill that gap.

So my question to wise and worldly readers: any tips in our first semester?  Any fears?  If you teach and you had this option, would you take it?

We've been stepping up peer observations at our secondary school, using a simple form with a place for a focus question from the teacher being observed and two main spaces for notes: "I notice..." notes (here's what I see) and "I wonder..." notes (what would happen if you did X, why do you do Y). Sometimes the impulse comes from the observed teacher, who wants feedback on a particular class, activity, or issue; often, though, it comes from the observer, who wants to learn by watching a colleague.

Right now we're asking people to do two peer observations during the year, and we haven't required any reporting beyond a debriefing conversation between the participants. Within our department, though, we have had conversations in which people said what they'd like to learn more about and colleagues have suggested which teachers (in and out of our department) they should observe. That was a nice conversation, calling one another out for things we admire.

I like peer observations. They can be a great way for observed teachers to get a friendly, supportive perspective on what's happening in the classroom, and for observers to get fresh ideas and perspective on their own teaching. It's just hard to get the culture of it going--people are busy and they build it up in their minds to be this formidable thing and keep putting it off.
You might also want to look at Small Group Instructional Diagnosis.

I really hate to say this, but that was the flavor of the day for part of one school year at my college, maybe 5 to 10 years ago. I want to say that it never took hold, but it would be more accurate to say that it never really got started, because there was no groundwork like you have already done.

I swapped observations with a colleague in a different field, an experience we both learned something from and should probably repeat, and have also observed a few innovative classroom approaches in my field and others. There are several approaches where it just isn't enough to read about it. You need to see it in action to appreciate that it can work. As in your description, this was totally off the books in the sense that the feedback was only shared with the person being observed and there was no report to the Dean. I'm not even sure if we each reported it as "service" in our annual reports. And in other cases it was just done for my own benefit, to see how something is done.

But there was never a central repository of resource people for various types of classroom activities. I think it was more along the lines of "who wants to do this" as just one of several things brought up at a fall convocation, and hardly anyone stepped up because no one had even heard the idea until that morning.
We've done something similar here for a while now.

It seems to be a great tool to either generally improve teaching or to tackle a persistent challenge.

Friends who have done it have told me it was very helpful.
I think that peer reviews of teaching techniques and effectiveness can be very useful. It would be nice if I could observe the teaching of other faculty members so that I might pick up some new ideas that I could incorporate in my own classes. In addition, I would appreciate hearing feedback from other faculty members so that I could improve my own teaching.

But the success of such a program depends critically on the amount of trust that exists between the faculty and the administration. There must not be even the remotest suspicion that these peer reviews could somehow become part of the administration's faculty performance review process, or that the administration could tap into the data. This is especially important if there is retrenchment and downsizing going on. In such an environment, I would be reluctant to give a fellow faculty member anything other than a sterling review, one claiming they practically walked on water in the classroom, if I feared that I could be putting their career at risk.

While I was working at Telecommunications Giant, our management attempted to introduce a similar sort of peer review process, in which we would be required to give feedback to our peers. Many of us felt very threatened by this idea, especially since there were layoffs going on. We feared that these peer reviews could be used by management as yet another means to justify the cutting of staff. Because of the bad feeling and the general atmosphere of distrust between staff and management, management eventually abandoned the idea.

I recently attended a one-day Great Teachers Seminar hosted by my institution. The costs to the institution were minimal: breakfast, lunch, and a facilitator. The event was hosted in-house and held on a work day, so there were no location or training costs. Each division or program nominated one faculty member to attend. We spent the full day discussing teaching strategies and innovations. The diversity of attendees was absolutely amazing. As an English instructor, I was able to offer insight about teaching writing for the faculty in our program areas (think nondestructive examination and police training). In return, I learned about the programs offered by our college and made wonderful contacts for future collaborative projects.

I highly recommend this model.

(And the peer review observations sound wonderful as well. If such an opportunity were available here, I'd participate.)
That sounds like a cool idea! I would be interested (as others have said) if I could trust that the evaluations wouldn't somehow end up "on my record". One approach that comes to mind is this: have the visiting prof write the evaluation on a piece of paper while they are watching the class (or afterwards, perhaps during a follow-up meeting if there is one) and give the piece of paper to the person being evaluated without making any copies of it.

Of course, if a person is really paranoid, they could suspect copies are being made secretly, but if someone is sufficiently paranoid then nothing will convince them. But I assume any electronic document that passes through the university is university property (I think that officially my email belongs to the university, for example, and they can look at anything that is in there), so suspicion of any kind of electronic report is not just paranoia. You could try to promise in writing that you will never peek, but I've learned through my experience with insurance companies that even the most seemingly airtight written promises can turn out to be full of loopholes...
Have the faculty member and evaluator meet ahead of time to talk about the evaluation instrument and anything that the faculty member is interested in having the evaluator focus on during the observation.

I would choose an evaluative instrument that is more like a rubric than a performance eval. If they get a "5" or a "1" on something, they should know why. This allows some norming between different members of the team.

I had a nursing faculty member eval my hematology class once, with mixed results. She expected me to have more group work and classroom participation. I think she didn't understand the content and found the course boring, which limited the usefulness of her feedback. I know you are trying to get cross-pollination, but it might make sense to pair up people who have enough background to understand what is being taught in the course they observe.
ArtMathProf @5:50AM -

The way we did it, there were no data and nothing was shared with the administration. It was none of their business, since the only thing they see are the results in the classroom. As you note, it can never work if the information is not completely private between the two people involved.