Wednesday, March 07, 2007


Candid Evaluations

It's the time of year to rate faculty promotion applications. (“Promotion” here refers to moving from Associate to Full Professor, or in some cases, Assistant to Associate.) Through a series of political compromises worked out over the years, we've developed a relatively complicated series of steps that a given application has to move through. People with input include the applicant, a peer (chosen by the applicant), the department chair, a faculty union committee, the dean, and the VPAA. (Eventually, the President and Board have final say, but in practice they typically go with the VPAA's recommendation.)

We've also developed a relatively clear set of criteria, using fewer categories than in the past and insisting on more detail for each. (For example, we dropped “community service” as a category when we realized that nobody could define it, let alone explain how to judge or even verify it. One professor always wrote “paid my taxes” under community service. I admire the panache, if not the altruism.)

We've had multiple discussions with the faculty union about processes and criteria, so there aren't any surprises. (They were as happy to drop the “community service” category as we were!) We have a very experienced set of department chairs. You'd think this would be easy.


No self-respecting professor would choose a peer who would trash her. So these inputs become largely meaningless by virtue of being uniform. Everybody is perfect.

The department chairs, as a group, see much more harm than good in giving a less-than-glowing evaluation. There's some logic to their position: they have to live with the applicant, and they generally don't see the applications from other departments. As long as an applicant can put at least something in each category, it's tempting to say “ahh, what the hell,” call it good, and kick it upstairs. Multiply that dynamic by a couple of decades, and you get a weird self-defeating process in which broad participation actually results in centralized decision-making, since all that conflict aversion renders the vast majority of the input irrelevant. The VPAA became, by default, the designated bad guy, whose job it was to overturn all those lower-level recommendations.

(Full disclosure: the deans' responses have largely followed the same dynamic as the chairs'. By the time I got here, the process was improbably long, impressively expansive, and ultimately decided by one person.)

We're trying now to introduce some level of candor to the evaluation process. It's an ugly, uphill battle.

We have some basic structural reasons to avoid candor. An unsuccessful promotion applicant remains on the faculty, still with lifetime tenure. Some people have been denied more than once, and all that grudge-nursing can create some very unhappy corners of the college. In the absence of candor, too, an unsuccessful applicant may or may not have a clear sense of the reasons for denial. Tenure abhors a vacuum, though, so in the absence of real reasons, people invent reasons that make sense to them. These often involve ugly accusations of bias, or personal vendettas, or simply misplaced anger.

To make matters worse, we have no merit pay system, other than promotion 'bumps.' In the absence of a consequence for a good or bad review, the path of least resistance is grade inflation. Multiply that by a few decades, and what was born as a failure of nerve has gradually become an inalienable right. It's tough to introduce candor suddenly at the point of promotion, when routine reviews leading up to it have been uniformly glowing (even if only because they all are). It's even tougher to generate candor when the hostility it would engender is palpable and immediate, and the payoff long-term and abstract.

Finally, there's the “you first” problem. A reasonable department chair might calculate, correctly, that if she decides to bite the bullet and go for candor, and her colleagues in other departments don't, then she will generate all of the downside with none of the upside. Better to take the safe way out, and accede to grade inflation.

The faculty union wouldn't stand for merit pay, and we're reasonably sure that in this political climate, a strike would hurt the college more than any concessions we might win on individual issues would help. So we're pretty much caught in the parameters as they exist, perverse incentives and all. Getting smart people to act contrary to palpable incentives in the name of abstract rightness is an uphill battle.

Has your college found a way to do candid evaluations without sacrificing broad input?

At my place, input is nauseatingly broad; every faculty member at the aspired-to rank or above in the entire college (not university, but the still-large subset of the college) casts a vote with supporting narrative. The administrators see these votes; the rest of the faculty do not. These votes are not binding on the VPAA and Dean, but they are influential. There is a faculty committee that interviews the aspirant (and these are not soft-ball interviews, these are very tough "what have you done for me lately" interrogations). The committee casts a vote that is supposed to be a guide, along with the faculty vote, to the VPAA and Dean, but in fact is often determinative.

If their vote goes against you, the committee chair generally comes to visit with a copy of the recommendation for you to look at and a gentle suggestion that maybe it would be better to withdraw from the process before they submit it. It is very common for people to have to take two or three runs at promotion before they get it. In fact, the process was so onerous (documentation could run hundreds of pages) that for a long time we were something like 75% assistant professors, 21% associates and 4% fulls. (I myself was an assistant professor for 18 years until I finally realized that I had earned the associate rank, I was just intimidated. It took two tries, but I succeeded the second time.) These proportions were wildly out of whack with the other colleges at our home university.

Some modifications of the process have been made (mostly forcing faculty to serve on promotion committees, since they were, especially at the full professor level, becoming an inbred old-boys-club) and the dean encouraged people to start the process and give it a shot. More people are moving up now, but it's still fraught. It is certainly no slam dunk.
Our faculty is unionized, but not everyone belongs to the union. Recently, the union reflected the complaints of some faculty that it was simply too much paperwork to do their annual review, so it offered the administration a deal: less paperwork for those going for "average," but more paperwork and FEWER PEOPLE approved for "above average." The union has also so constrained the system that anyone doing anything differently, including things that actually attract new students, is penalized rather than rewarded. I would frankly prefer a professor's fate to be in the hands of administrators, but they currently claim the union ties their hands. . . .
Is service to the university/committee membership counted separately from "community service"? If so, then I can see the confusion of trying to define community service.

I'm a grad student, but even first years in my dept. know that community service is usually a catchphrase for service to the university/have you been putting in your committee time. As a minority female, one of the first things other minority female profs told us was to limit our time commitments to committees and university service, as we would constantly be asked to serve both as grad students and as faculty (since in some depts we're in limited supply), and that we needed to choose wisely and limit our involvement.

Of course, at my institution students can put in time on university committees, so I did my "community service" and put in time on the "health center advising committee" (aka the group that buys health insurance), an often thankless task (which involved explaining over and over again to my colleagues how we were limited in our decisions, and btw, here's how health care works in the U.S.). As someone pointed out, this only prepared me to sit on the faculty version of the committee and endlessly explain how health care works in the U.S., over and over again, to my colleagues once hired.
Why is "candor" needed if your criteria for promotion are well defined and the standards people need to meet are public and clear? Wouldn't it be pretty obvious if people were meeting the criteria or not? I think this part of the process has to be fixed if people are able to get through without really meeting all expectations.

Another thing that puzzles me...why shouldn't everyone who applies get promoted? If they meet all the criteria and can document that shouldn't that be rewarded? Maybe your department chairs really do think all their people are doing a good job.
The Full professors in my department recently nixed a promotion because the aspirant had a history of missing classes.

It was true-- there was a history of whispers, anyway, but 1) there was no paper trail at all-- to the contrary, there were perfectly good annual reviews without mention of the problem. 2) The aspirant had documented health problems that caused the frequent absences.

"Candor" in the promotion process, when not backed up by candor in the review process, or in the day to day management of the department seems totally screwed up to me-- and probably illegal.

The aspirant doesn't do the job all that well because of illness--I think there should be some sort of way of facing up to this outside the promotion process...
Ivory asks "Why is "candor" needed if your criteria for promotion are well defined and the standards people need to meet are public and clear?"

(I'm the first "Anonymous" above, just so you know where I'm coming from.)

The problem is that the process is very heavily a judgement call in many ways. For example, I was very light on published research (which is not supposed to be a major part of my job in any case, but is not a negligible category either) because instead I spent a great deal of time keeping up with alternate instructional delivery technologies, trying them out in my classroom and doing workshops and presentations in that area instead. The guidelines are silent as to whether my hard work in learning and sharing my expertise in instructional delivery was equal to publishing. My committee decided, after a rather grueling and detailed defense interview, that it was, and they so advised the VPAA and the Dean. If we had sharply delineated expectations and standards, I might not have gotten through.

I don't have a problem with this. Our peer review isn't a love-fest; candidates for promotion prepare for their interviews and the committee members take their responsibilities seriously. Questioning is pointed. There are, of course, guidelines, and most successful candidates have been shot down at least once, and have the committee's recommendation to use to improve for the next run. I took three years between my first and second tries at associate prof, beefing up my accomplishments where the committee deemed them lacking, but doing it in a way that had value for me and my students.

At my school, you can't perform to the guidelines. You have to actually do something. Taking that element of judgement out of the process reduces the process to a series of checkboxes, and that rarely produces excellence in my experience.
Yes, community service was considered in a separate category from college service. That made it very tough to define.

I'll second the observation that judgment is, and must be, a part of the evaluation process. We can't create the equivalent of an automated system by just putting out a checklist. Needs of the college change, people come up with new and interesting ways of making their mark, and on the opposite end, some people get very good at finding loopholes in closed systems.
I'll second the observation that judgment is, and must be, a part of the evaluation process.

True, except that one of the things that bugged me about working as an academic is that I felt my boss had no idea about how good a job I was doing. I could spin them lines about why publishing outputs were x, professional development was y and whatever else, but I think in most other sectors bosses have a clearer idea about how an employee is performing.

Certainly in design work you got to see the work of everyone who was working for you and it was pretty easy to evaluate. In academia it's much more difficult.

In a Dean Dad-ish comment, I'd suggest that the mantra of faculty independence and academic freedom has some of the blame for this. Enclosing the classroom as a privatised/individualised space has benefits (speaking out, moral conscience etc.) but also downsides (realistic quality control).
Needs of the college change, people come up with new and interesting ways of making their mark, and on the opposite end, some people get very good at finding loopholes in closed systems.

This would not lend itself to well-defined promotion criteria. You would end up comparing apples to oranges, and frankly, I don't know how you could do that fairly.

I just got a generic thumbs-up teaching eval; while it won't hurt me, it didn't help either. I would love to get input over time from the same faculty member, to really get a good sense of whether or not I'm getting better as an instructor. The way my department does peer evals makes that impossible.