Sunday, October 01, 2017

Promotion Criteria


If you were designing faculty promotion criteria for a community college from scratch, how would you do it?

Notice that I specified “for a community college.”  That means looking at teaching and college service.  Research is not a requirement here, nor will a strong publication record excuse lackluster teaching.  R1 criteria are different, reflecting a different mission.  Here, I’m looking at teaching-intensive colleges.

I’ve seen several variations up close, and none of them seems quite right.  And I’ll stipulate here that promotion decisions are always subject to board approval.

One involved listing criteria in the faculty contract, and then having the chief academic officer (in that case, the VPAA) make the call.  It had the advantage of relative clarity, and it spared faculty from having to play the bad guy with their colleagues.  But it relied on a single person, so it was subject to that person’s blind spots and biases.  Even when that person honestly tried to be fair and objective, those who lost out didn’t always believe it.

Another involved making promotion automatic upon time served.  After x number of years, you got promoted.  Exceptions were made only in extreme cases; the default assumption, nearly always correct, was that once you had the right number of years, you moved up.

It had the advantage of relative transparency, and autopilot had the virtue of taking bias out of the equation.  (More accurately, it made the initial hiring process all the more important.)  But it also drained ranks of meaning and of money.  For the college to afford to promote everyone, it couldn’t spend very much on anyone; people hit the top of the scale quickly, and the top was disappointingly low.  Over time, that led to widespread crankiness.

One college with a merit-pay system gave each professor a set number of points each year, like a grade.  When she accumulated enough points, she got promoted.  The idea was that better performers would climb faster, but even someone who did just enough to stick around would get there eventually.  The issues there had to do with merit pay generally, and with high performers hitting the ceiling early in their careers.
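To make the mechanics concrete, here’s a minimal sketch of that points system in Python.  Every number in it -- the ratings, the point values, the promotion threshold -- is invented for illustration; an actual contract would define its own scale.

```python
# Hypothetical sketch of the points-per-year merit system described above.
# Ratings, point values, and the promotion threshold are all assumptions.

PROMOTION_THRESHOLD = 12  # points needed to move up a rank (invented)

POINTS_PER_RATING = {
    "excellent": 3,     # strong annual reviews climb fastest
    "good": 2,
    "satisfactory": 1,  # "good enough to stick around" still accrues points
}

def years_to_promotion(annual_rating: str) -> int:
    """Years needed to reach the threshold at a constant annual rating."""
    points, years = 0, 0
    while points < PROMOTION_THRESHOLD:
        points += POINTS_PER_RATING[annual_rating]
        years += 1
    return years

for rating in POINTS_PER_RATING:
    print(f"{rating}: promoted after {years_to_promotion(rating)} years")
```

With these made-up numbers, an “excellent” performer reaches the threshold in four years while a merely “satisfactory” one takes twelve -- which captures both the selling point and the ceiling problem in one loop.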

Finally, one did promotion by the vote of a large faculty committee.  That had the advantage of diluting any one person’s blind spots or biases, and from what I’ve seen, faculty are often much harsher on each other than administrators are on them.  (The same often applies when students grade each other.)  But it tended to favor people whose service was conspicuous over those who worked behind the scenes, and people with impassioned advocates over those without.  Service tended to get weighted more heavily than teaching, presumably because it’s easier to measure and compare across disciplines.

Process matters because criteria are so hard to nail down.  What distinguishes a pretty good teacher from an excellent one in a way that would hold up in court?  Certain kinds of terrible are easy to spot: the teacher who often skips class, for instance, or the one who shows up drunk.  (In my career, I have seen both.)  But if you want to indicate a higher standard than “good enough to not get fired,” it gets tricky.

Class observations by deans can tell you something, but they’re necessarily snapshots; I might discern whether someone has rapport with students, but I might not know that she routinely takes two months to return papers.  Peer observations are subject to all manner of social pressure and logrolling; at the places that have used them, they tend to default to “excellent” just because nobody wants to be the jerk.  Student course evaluations can be helpful, though their limitations are well-known.  High “pass” rates can indicate excellent teaching, easy material, and/or easy grading.  Pass rates in subsequent courses in a discipline can help in some cases, but many courses aren’t sequential, and even when they are, later pass rates often depend on the subsequent instructor.  Popularity with students can be measured by how quickly sections fill -- the rockstars’ classes always fill first -- but popularity doesn’t necessarily (or only) indicate excellence.

I’m making a few assumptions here, of course.  One is that ranks exist.  They don’t have to, and I’ve heard of places that don’t use them, but over time, that strikes me as demoralizing: another way of saying “we don’t have ranks” is “you hit your ceiling on your first day of work.”  The second is that promotion should mean something, and should pay accordingly; that tends to rule out the “participation trophy” model.  The third is that some of the folks who don’t get promoted will be unhappy about it, and will want -- and deserve -- a reasonable explanation.  A better system would make it easy to explain the difference between a pretty good teacher and an excellent one.  Right now we triangulate several flawed measures, simply for lack of any better ideas.
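To picture what that triangulation might look like, here’s a hypothetical sketch in Python.  Nothing in it comes from an actual contract: the measures, the weights, and the zero-to-one scale are all assumptions for illustration.

```python
# Hypothetical composite of several flawed measures: scale each signal
# to a 0-1 range and combine with explicit weights. All values invented.

WEIGHTS = {
    "dean_observation":  0.30,  # a snapshot, so only part of the picture
    "peer_observation":  0.10,  # weighted low: defaults to "excellent"
    "student_evals":     0.30,  # helpful, with well-known limitations
    "section_fill_rate": 0.15,  # popularity proxy, not excellence itself
    "service":           0.15,
}

def composite_score(scores: dict) -> float:
    """Weighted average of measures, each already scaled to 0-1."""
    return sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

# Example: strong in the classroom, quiet behind-the-scenes service.
candidate = {
    "dean_observation":  0.90,
    "peer_observation":  0.95,
    "student_evals":     0.85,
    "section_fill_rate": 0.80,
    "service":           0.50,
}

print(f"composite: {composite_score(candidate):.2f}")
```

The arithmetic is trivial; the contested part is choosing the weights, which is exactly the judgment call that no process has yet made painless.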

Anyone have a better idea?  If you could design criteria from scratch, what would they look like?  Alternatively, if you could design a process from scratch, what would it look like?