Actual conversation I had with a student in 2001, shortly after the 9/11 attacks:
Student: We should bomb ‘em back to the stone age. That’ll teach ‘em!
Me: Hmm. When we got attacked, how did we respond?
Student: Screw ‘em!
Me: Exactly. And why wouldn’t they respond that way to us?
The student was dumbstruck by the realization that the people on the other side of his proposal were three-dimensional beings, with the same emotional range he had. He simply hadn’t thought of it.
I was reminded of that exchange in seeing the new CCRC report on the unintended impacts of performance funding on public higher education. It suggests that performance funding models often elicit institutional or employee behavior different from that intended by the authors of the models. In other words, three-dimensional people on the receiving end of policies will act in their own perceived self-interest, within the confines of the options they perceive.
This shouldn’t be shocking. In fact, I predicted several of the outcomes the CCRC paper notes back in 2012 (the link is here). It wasn’t difficult; all I had to do was imagine how statewide mandates would play out locally. If you take seriously the idea that people on the receiving end of policies will respond to incentives -- whether intended or not -- then it should not be surprising to discover that some of them gamed the system. The system rewarded gaming.
The easy case of gaming is grade inflation. In the very short term, it’s possible to increase pass rates simply by, well, increasing pass rates. That can be done directly, as in the public school districts that responded to NCLB testing by having teachers change answers. But it’s most often done indirectly, through dropping not-very-subtle hints to vulnerable faculty that they don’t want to fail too many people. That kind of word travels fast. Over the long term, it’s corrosive to the academic mission. In the short term, though, it can make numbers look better.
But gaming doesn’t even have to be as sinister as that. A new curriculum takes a solid year to develop, if not more. Once it’s finally running, the effects on graduation rates don’t show up for a few years. In the meantime, the institution is struggling to meet fixed costs in the face of mercurial annual changes in funding. When “performance” is measured annually, a one-year statistical blip can have real financial consequences. In a context like that, a quick fix can look much more practical than a sustainable long-term change with a longer incubation period. Over time, those quick fixes play out logics of their own.
The CCRC paper makes some smart recommendations toward the end about ways to engineer performance funding to prevent gamesmanship. Among other things -- and I can’t endorse these strongly enough -- it recommends paying for improved data analysis capacity on campuses, and for greater IT support. Those may sound wonky, but they matter, and they’re both the kind of “pay now, earn rewards later” expenses that are easy to sacrifice in the face of short-term imperatives. I’d also echo the call for basing performance measures on a college’s own past, rather than on a zero-sum battle with its counterparts; otherwise, you’ll punish the kinds of collaboration that lead to sustainable improvement. To the extent that moving away from zero-sum is considered politically impossible, I’d suggest you’ve discovered something fundamental about the motives behind it.
At a more basic level, though, any serious attempts at improvement have to recognize that actors will respond to the incentives that are relevant to them. As Madison noted so long ago, if men were angels, no government would be necessary. But they aren’t, so it is. A system that only works if everyone puts aside their own self-interest is doomed to fail. If you’re serious about measuring performance, you have to remember the creativity of performers. The lesson they learn from your policy may not be the lesson you had in mind.