Tuesday, November 18, 2014


Other People Strike Again

Actual conversation I had with a student in 2001, shortly after the 9/11 attacks:

Student: We should bomb 'em back to the stone age.  That'll teach 'em!

Me: Hmm.  When we got attacked, how did we respond?

Student: Screw 'em!

Me: Exactly.  And why wouldn’t they respond that way to us?

The student was dumbstruck by the prospect that the people on the other side of his proposal were three-dimensional beings, with the same emotional range he had.  He simply hadn’t thought of it.

I was reminded of that exchange in seeing the new CCRC report on the unintended impacts of performance funding on public higher education.  It suggests that performance funding models often elicit institutional or employee behavior different from that intended by the authors of the models.  In other words, three-dimensional people on the receiving end of policies will act in their own perceived self-interest, within the confines of the options they perceive.

This shouldn’t be shocking.  In fact, I predicted several of the outcomes the CCRC paper notes back in 2012 (the link is here).  It wasn’t difficult; all I had to do was to imagine how statewide mandates would play out locally.  If you take seriously the idea that people on the receiving end of policies will respond to incentives -- whether intended or not -- then it should not be surprising to discover that some of them gamed the system.  The system rewarded gaming.

The easy case of gaming is grade inflation.  In the very short term, it’s possible to increase pass rates simply by, well, increasing pass rates.  That can be done directly, as in the public school districts that responded to NCLB testing by having teachers change answers.  But it’s most often done indirectly, through dropping not-very-subtle hints to vulnerable faculty that they don’t want to fail too many people.  That kind of word travels fast. Over the long term, it’s corrosive to the academic mission.  In the short term, though, it can make numbers look better.

But gaming doesn’t even have to be as sinister as that.  A new curriculum takes a solid year to develop, if not more.  Once it’s finally running, the effects on graduation rates don’t show up for a few years.  In the meantime, the institution is struggling to meet fixed costs in the face of mercurial annual changes in funding.  When “performance” is measured annually, a one-year statistical blip can have real financial consequences.  In a context like that, a quick fix can look much more practical than a sustainable long-term change with a longer incubation period.  Over time, those quick fixes play out logics of their own.

The CCRC paper makes some smart recommendations toward the end about ways to engineer performance funding to prevent gamesmanship.  Among other things -- and I can’t agree with these enough -- it recommends paying for improved data analysis capacity on campuses, and for greater IT support.  Those may sound wonky, but they matter, and they’re both the kind of “pay now, earn rewards later” expenses that are easy to sacrifice in the face of short-term imperatives.  I’d also echo the call for basing performance measures on a college’s own past, rather than on a zero-sum battle with its counterparts; otherwise, you’ll punish the kinds of collaboration that lead to sustainable improvement.  To the extent that moving away from zero-sum is considered politically impossible, I’d suggest you’ve discovered something fundamental about the motives behind it.

At a more basic level, though, any serious attempts at improvement have to recognize that actors will respond to the incentives that are relevant to them.  As Madison noted so long ago, if men were angels, no government would be necessary.  But they aren’t, so it is.  A system that only works if everyone puts aside their own self-interest is doomed to fail.  If you’re serious about measuring performance, you have to remember the creativity of performers.  The lesson they learn from your policy may not be the lesson you had in mind.

I can't remember the source of my favorite parable about perverse incentives and the details may be wrong, but it goes like this:

A shoe-store chain wanted to increase sales of its "leather protector" spray, which significantly boosted the profit margin on a shoe sale when tacked on.

So they decided to make raises for the sales clerks contingent on selling a certain quantity of this spray.

Whoever made this policy, however, neglected to take into account that the clerks had the power to mark down shoe prices for various legitimate reasons.

Some enterprising clerks therefore started offering a $15 discount on the shoes to customers who bought a $15 can of spray.

Sales of the spray went way up, but spray sales turned out not to be the metric that actually mattered after all...

In the present environment, there are a whole bunch of perverse incentives out there. They come from several sources, including the outcomes assessment movement, the US News rating system, and now the state-based performance funding system. For the for-profits, there are the pending “gainful employment” regulations. Perhaps in the future there will also be the federal ratings system proposed by the Obama administration.

These will cause colleges and universities to forget their mission of educating the next generation of citizens, and to concentrate instead on meeting a set of numbers imposed by some sort of external authority. If I don’t meet my numbers, I could get into some serious trouble. My institution could lose its funding, my school could lose its accreditation, and my school could drop in the ratings. Or perhaps I could even lose my job. So my main task will be to make sure I “meet my numbers”, and I will certainly make sure that I do, by hook or by crook.

Untenured faculty, as well as off-tenure-track faculty and adjunct part-timers, are especially vulnerable in such an environment. We had better not fail too many students or give too many low grades, lest the school miss its graduation numbers or drive too many students away, adversely affecting the school's bottom line. We had better make sure that our "outcomes assessment" numbers look good, lest we get called on the carpet by the administration, and perhaps even lose our jobs.

Consequently, there is every reason for the faculty to avoid reporting "bad numbers" to the administration. There is every temptation for us to fake our numbers or simply make up the results, just to keep ourselves out of trouble. And the administration has every motive to send in fake numbers, just to avoid trouble with the accreditors or with the state funding agencies, and to avoid sliding downward in the ratings. So the system will come to resemble that of the old Soviet Union, in which everyone lied to everyone else all the way up the chain, leading to top Party officials making decisions based on bad data.

An example of this was told by a friend of mine, who worked for a manufacturing company. Management wanted to reduce the number of defects, so they set up a rewards and punishment system based on the reported number of defects. My friend’s unit kept getting beaten up by management for having too many defects. It turned out that they were the only ones reporting honest numbers—all of the others were sending in faked statistics.

Did you see the news story today about people using bogus third-party seller prices on Amazon to game Walmart's price-match guarantee? They were getting gaming systems at 75% off.

Not what they intended.