My state is starting to make noises about basing appropriations for public colleges on “performance” numbers. Since it’s fairly clear that we’re talking about reallocating existing money, rather than adding new money to the pot, some campuses stand to get more, and others stand to get less.
Naturally, this has led to some pretty animated discussion about which metrics to use. There’s real money at stake.
Performance funding sounds great until you try to do it. In theory, I’m perfectly comfortable with the idea that some colleges do a better job than others, even within the same sector. And I’m fine with the idea that taxpayers deserve some assurance that public employees are delivering a decent bang for the buck. My performance as an employee is evaluated (and quantified) annually, so it’s not like I’m unfamiliar with the concept. But applied to a college as a whole, the concept gets murky.
Simply put, what should count?
Graduation rates seem like an obvious place to start, but they aren’t, really. The federal IPEDS definition includes only first-time, full-time students who claim to be seeking degrees. Returning students, part-time students, reverse transfers, and lateral transfers don’t count. Students who take “too long” count as dropouts, even if they subsequently graduate. Students who take a year and then transfer to a four-year college count as dropouts, even if they complete the four-year degree on time. And of course, a college’s graduation rate has a great deal to do with the students it attracts. All else being equal, a college dropped in the middle of a slum will have a lower graduation rate than one in an affluent suburb. Community colleges in states with weak four-year sectors have higher graduation rates than community colleges in states with robust four-year sectors, since in the former case, local high achievers have fewer options. Attributing the difference to college “performance” becomes self-fulfilling.
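To make the arithmetic concrete, here’s a toy sketch (in Python, with invented names and data, not actual IPEDS code) of who even makes it into the official cohort. The point is just how many successful students the definition never sees.

```python
# Toy sketch of the IPEDS-style cohort restriction (invented data, not real IPEDS code).
students = [
    {"id": "A", "first_time": True,  "full_time": True,  "degree_seeking": True,
     "graduated_within_150pct": True},   # counted, and counted as a success
    {"id": "B", "first_time": True,  "full_time": True,  "degree_seeking": True,
     "graduated_within_150pct": False},  # transferred and finished elsewhere: still a "dropout"
    {"id": "C", "first_time": False, "full_time": True,  "degree_seeking": True,
     "graduated_within_150pct": True},   # returning student: invisible to the rate
    {"id": "D", "first_time": True,  "full_time": False, "degree_seeking": True,
     "graduated_within_150pct": True},   # part-timer: also invisible
]

cohort = [s for s in students if s["first_time"] and s["full_time"] and s["degree_seeking"]]
grads = [s for s in cohort if s["graduated_within_150pct"]]

print(f"Official rate: {len(grads)}/{len(cohort)} = {len(grads)/len(cohort):.0%}")
# Official rate: 1/2 = 50%, even though every student in this toy roster ends up with a degree.
```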
Some formulae give ‘premiums’ for students from underrepresented groups, STEM majors, or other cohorts that the state wants to encourage. The idea is to incentivize colleges to do what they can to reach broader social goals.
This strikes me as more promising than a simple graduation rate, but still quite difficult to get right. Students choose majors; colleges don’t assign them. If its mix of majors didn’t produce a favorable funding outcome, a rational college would redirect its own internal resources to change that mix. Frustrate and turn away enough humanities majors, and your STEM percentage increases by default.
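For the sake of argument, here’s a premium-weighted formula in miniature. The weights are numbers I made up, not any state’s actual policy; the incentive they create is the real point.

```python
# Hypothetical premium weights, invented for illustration, not any state's formula.
WEIGHTS = {"base": 1.0, "underrepresented": 0.5, "stem": 0.5}

def completion_points(completers):
    """Sum weighted 'points' for a list of completers (dicts of flags)."""
    points = 0.0
    for c in completers:
        points += WEIGHTS["base"]
        if c.get("underrepresented"):
            points += WEIGHTS["underrepresented"]
        if c.get("stem"):
            points += WEIGHTS["stem"]
    return points

# Two colleges, 100 graduates each, different mixes of majors:
college_a = [{"stem": True}] * 60 + [{}] * 40   # STEM-heavy
college_b = [{"stem": True}] * 20 + [{}] * 80   # humanities-heavy

print(completion_points(college_a))  # 130.0
print(completion_points(college_b))  # 110.0
# Identical graduate counts; the funding gap comes entirely from the mix of majors,
# which students choose and colleges can only nudge (or squeeze).
```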
We could take a page from sabermetrics, and try to look at “value added.” This was the approach taken in Academically Adrift, and it formed the basis of the claim that roughly half of college students don’t improve their critical thinking skills in the first two years of college.
But I literally can’t imagine the selective institutions going along with that. If many of their students arrive already competent, then it’s hard to add much value. I suspect they’d kill this initiative in the crib.
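A toy example, with invented scores on an assumed 100-point assessment, shows the ceiling problem:

```python
# Toy "value added" calculation; all numbers invented for illustration.
def value_added(entry_avg, exit_avg):
    return exit_avg - entry_avg

open_access_gain = value_added(entry_avg=55, exit_avg=70)  # 15 points of measured growth
selective_gain   = value_added(entry_avg=88, exit_avg=93)  # 5 points, with only 12 left to gain

print(open_access_gain, selective_gain)
# The selective school may be teaching brilliantly; its students simply arrived near the ceiling.
```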
We could take a page from Achieving the Dream, and use milestones to completion as the relevant measures: completion of developmental courses, completion of 15 credits, etc. Again, there’s some appeal to this, but it doesn’t control for different demographics. And under a desperate or clueless local administration, it could easily result in not-so-subtle pressure to just pass students along, regardless of performance.
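Again in miniature, and again with invented data, a milestone tally might look something like this. Notice that nothing in it knows who walked in the door, or how the milestones got crossed:

```python
# Toy milestone tally: hypothetical thresholds and invented student records.
MILESTONES = {
    "completed_dev_math": lambda s: s["dev_math_done"],
    "earned_15_credits":  lambda s: s["credits"] >= 15,
    "earned_30_credits":  lambda s: s["credits"] >= 30,
}

students = [
    {"dev_math_done": True,  "credits": 24},
    {"dev_math_done": False, "credits": 9},
    {"dev_math_done": True,  "credits": 31},
]

for name, test in MILESTONES.items():
    hit = sum(test(s) for s in students)
    print(f"{name}: {hit}/{len(students)}")
# Every one of these counters can be nudged upward by quietly lowering the bar.
```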
And the entire enterprise seems a bit silly when you compare it to other public services, like, say, firefighting. Should more effective fire departments get more funding than less effective fire departments? Or would that just make the less effective ones even worse? And how would we define “effective,” anyway? “There’s been a wave of arsonists in the city. Clearly, the fire department is loafing on the job. Let’s cut their funding!” Um...
Sometimes, poor performance can be a product of a lack of funding. When that’s the case, basing funding on performance ensures a death spiral. Which I sometimes think is the point.
Wise and worldly readers, if you had to quantify “performance” of the various public colleges in your states, how would you do it? What measures would you use?