Wednesday, September 26, 2012

 

Performance Anxiety

My state is starting to make noises about basing appropriations for public colleges on “performance” numbers.  Since it’s fairly clear that we’re talking about reallocating existing money, rather than adding new money to the pot, some campuses stand to get more, and others stand to get less.

Naturally, this has led to some pretty animated discussion about which metrics to use.  There’s real money at stake.

Performance funding sounds great until you try to do it.  In theory, I’m perfectly comfortable with the idea that some colleges do a better job than others, even within the same sector.  And I’m fine with the idea that taxpayers deserve some assurance that public employees are delivering a decent bang for the buck.  My performance as an employee is evaluated (and quantified) annually, so it’s not like I’m unfamiliar with the concept.  But applied to a college as a whole, things get murky.

Simply put, what should count?

Graduation rates seem like an obvious place to start, but they aren't, really.  The federal IPEDS definition only includes first-time, full-time students who claim they are seeking degrees.  Returning students, part-time students, reverse transfers, and lateral transfers don't count.  Students who take “too long” count as dropouts, even if they subsequently graduate.  Students who take a year and then transfer to a four-year college count as dropouts, even if they complete the four-year degree on time.  And of course, a college's graduation rate has a great deal to do with the students it attracts.  All else being equal, a college dropped in the middle of a slum will have a lower graduation rate than one in an affluent suburb.  Community colleges in states with weak four-year sectors have higher graduation rates than community colleges in states with robust four-year sectors, since in the former case, local high achievers have fewer options. Attributing the difference to college “performance” becomes self-fulfilling.
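To make the definitional problem concrete, here's a toy calculation with invented records. The field names, the numbers, and the six-semester cutoff (standing in for the 150-percent-of-normal-time rule) are all made up for illustration; the real IPEDS rules are more involved.

    # A toy contrast between an IPEDS-style rate and an all-inclusive one.
    # Records, field names, and the six-semester cutoff are invented.

    students = [
        dict(first_time=True,  full_time=True,  graduated=True,  semesters=4, finished_elsewhere=False),
        dict(first_time=True,  full_time=True,  graduated=False, semesters=2, finished_elsewhere=True),   # transferred, finished a BA on time
        dict(first_time=True,  full_time=True,  graduated=True,  semesters=8, finished_elsewhere=False),  # graduated, but took "too long"
        dict(first_time=False, full_time=True,  graduated=True,  semesters=4, finished_elsewhere=False),  # returning student: excluded
        dict(first_time=True,  full_time=False, graduated=True,  semesters=6, finished_elsewhere=False),  # part-time: excluded
    ]

    TIME_LIMIT = 6  # semesters

    # IPEDS-style: first-time, full-time cohort; on-time completions here only.
    cohort = [s for s in students if s["first_time"] and s["full_time"]]
    on_time = [s for s in cohort if s["graduated"] and s["semesters"] <= TIME_LIMIT]
    print(f"IPEDS-style rate: {len(on_time)}/{len(cohort)} = {len(on_time) / len(cohort):.0%}")

    # Inclusive: anyone who completed a credential anywhere, on any timetable.
    succeeded = [s for s in students if s["graduated"] or s["finished_elsewhere"]]
    print(f"Inclusive rate:   {len(succeeded)}/{len(students)} = {len(succeeded) / len(students):.0%}")

Same five students, and the “rate” swings from 33 percent to 100 percent depending on who counts.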

Some formulae give ‘premiums’ for students from underrepresented groups, STEM majors, or other cohorts that the state wants to encourage.  The idea is to incentivize colleges to do what they can to reach broader social goals.

This strikes me as more promising than a simple graduation rate, but still quite difficult to get right.  Students choose majors; colleges don’t assign them.  If the majors weren’t distributed in a way that resulted in a positive funding outcome, a rational college would redistribute its own internal funding to try to change that.  Frustrate and turn away enough humanities majors, and your STEM percentage increases by default.  
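For concreteness, here's a minimal sketch of how premium weighting might work. The categories and weights are invented for illustration, not taken from any actual state formula.

    # Hypothetical premium weighting: each completion counts as 1.0, plus a
    # bonus for cohorts the state wants to encourage. Categories and weights
    # are invented for illustration.

    PREMIUMS = {"stem": 0.5, "underrepresented": 0.5, "adult_learner": 0.25}

    def weighted_completions(completions):
        """Each completion is a set of tags; score is 1.0 plus any premiums."""
        return sum(1.0 + sum(PREMIUMS.get(tag, 0.0) for tag in tags)
                   for tags in completions)

    college_a = [{"stem"}, {"stem", "underrepresented"}, set()]  # 3 graduates
    college_b = [set(), set(), set(), set()]                     # 4 graduates, no premium cohorts

    print(weighted_completions(college_a))  # 4.5
    print(weighted_completions(college_b))  # 4.0

The college with fewer graduates comes out ahead, which is precisely the incentive to chase the weighted cohorts rather than to serve more students.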

We could take a page from sabermetrics, and try to look at “value added.”  This was the approach taken in Academically Adrift, and it formed the basis of the claim that roughly half of college students don’t improve their critical thinking skills in the first two years of college.

But I literally can’t imagine the selective institutions going along with that.  If many of their students arrive already competent, then it’s hard to add much value.  I suspect they’d kill this initiative in the crib.
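For what it's worth, the value-added idea amounts to crediting a college for the gap between observed outcomes and what you'd predict from incoming preparation. A minimal sketch, with invented scores and an invented growth baseline:

    # Sketch of "value added": average gap between observed exit scores and
    # those predicted from entry scores. All numbers, and the flat five-point
    # growth baseline, are invented for illustration.

    EXPECTED_GROWTH = 5.0  # assumed points of growth for any student anywhere

    def value_added(pairs):
        """pairs: (entry_score, exit_score). Mean residual over prediction."""
        residuals = [exit_score - (entry_score + EXPECTED_GROWTH)
                     for entry_score, exit_score in pairs]
        return sum(residuals) / len(residuals)

    open_access = [(40, 52), (45, 55), (50, 58)]  # low entry, solid gains
    selective   = [(85, 88), (90, 92), (88, 91)]  # high entry, small gains

    print(f"Open-access college: {value_added(open_access):+.1f}")
    print(f"Selective college:   {value_added(selective):+.1f}")

The selective college's students arrive near the ceiling, so it posts a negative number; hence the prediction that they'd kill it in the crib.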

We could take a page from Achieving the Dream, and use milestones to completion as the relevant measures: completion of developmental courses, completion of 15 credits, etc.  Again, there’s some appeal to this, but it doesn’t control for different demographics.  And under a desperate or clueless local administration, it could easily result in not-subtle pressures to just pass students along, regardless of performance.

And the entire enterprise seems a bit silly when you compare it to other public services, like, say, firefighting.  Should more effective fire departments get more funding than less effective fire departments?  Or would that just make the less effective ones even worse?  And how would we define “effective,” anyway?  “There’s been a wave of arsonists in the city.  Clearly, the fire department is loafing on the job.  Let’s cut their funding!”  Um...

Sometimes, poor performance can be a product of a lack of funding.  When that’s the case, basing funding on performance ensures a death spiral.  Which I sometimes think is the point.

Wise and worldly readers, if you had to quantify “performance” of the various public colleges in your states, how would you do it?  What measures would you use?

Comments:
Our state university system has performance-based funding. "Pass students along, regardless of performance" may as well be our new mission statement.
 
The push to allocate funding according to performance stems from anxiety that there's a lot of funding going to support aimless (or seriously underprepared) students with no hope of finishing any program at the home institution or elsewhere. For all the reasons you point out, existing metrics do not capture the many students who do complete a program. They also don't capture the other benefits that are being delivered, not least of which is opportunity that is owed even if the student can't make it in the end. My hunch is that there is no way to capture global performance at the CC level -- or any other level of post-secondary -- in a way that fairly compares one institution with another.

However, there are several actions at the margin that could bolster public confidence and maybe remove some of the pressure for performance comparisons. One would be to put tighter limits on perpetual students. Another would be to figure out a way to stop the practice of students registering in order to get Pell grants, the excess of which can be used for living expenses. A third is something that's being tried and should be tried more: heavy-duty advising for students who come in w/o a plan. Not the advising that is currently being done, which is mostly about signing students up for Gen Eds or remedial classes.
 
Performance-based appropriation is one of those things that sounds like a good idea at first sight, and it looks pretty good on a bumper sticker or in a political campaign. However, basing appropriations for public colleges on performance can have some rather perverse effects.

If the amount of money a college gets from the government depends on how well it performs on a certain set of externally-imposed metrics, the college is certainly going to make every effort to satisfy those metrics, no matter what they are. Eventually, the college will forget why it is there in the first place, and the entire organizational structure will evolve into something driven primarily by meeting those externally-imposed performance metrics.

The primary goal of the college will no longer be to educate the next generation of students—it will instead be to satisfy these externally-imposed performance metrics. A lot of these metrics will probably be based on meeting some set of numbers, and the primary goal of college administrators will be to make sure that they meet those numbers. If a college somehow screws up and misses its numbers, the threat of losing its funding will be held over its head. Teachers will be continually hassled and harassed about “meeting the numbers”, and the administration will ride herd on them to make sure that they are not doing anything that will cause the college to fail to meet its numbers. The college will become obsessed with meeting the numbers and keeping the funders happy, which will require hiring more and more administrators, adding more and more assistant and associate whatevers to the rolls, and dedicating more and more staff primarily to ensuring that the numbers are met. Colleges will become more and more like corporations, which are driven by the requirement to keep the stock price high and the financial analysts happy.

Another danger is that college administrations may be tempted to “juke the stats”, that is, to fake the numbers or simply make up the results, just to keep the school out of trouble. If my funding depends on how well I do on some set of externally-imposed metrics, I am certainly going to make sure that I satisfy those metrics, by hook or by crook. For example, if the performance metric is based on graduation rate, the school might be tempted to lower graduation standards and allow students to graduate regardless of how well they do in their classes. If the metric is based on student retention rate, the college may be tempted to pressure faculty to give just about all of their students a passing grade—if a teacher fails too many students, they could be in big trouble. If the performance metric is based on how many students there are in STEM fields, the school will be tempted to diss or defund its humanities programs, or eliminate them altogether. If the metric is based on student performance on a standardized test, there will be pressure on faculty to teach to the test or to simply make up the numbers.

 
The only metric I care about is: are the graduates employed? If they are, mission accomplished, from a public perspective. Are they well educated, satisfied, and fulfilled? Did they find a job in the field that they wanted? I hope so, but that's not a public issue; it was up to them to make the most of the opportunity.

Trouble is, you don't have access to the employment data, or if you do, it is self-reported and skews toward the successful.

I think of this because number 3 son graduated from a snooty liberal arts college a few years ago, and I joined him in attending an alumni event in our area. The college president spoke, and did a great job. Son mentioned "he never talked to us like that when we were students." Point is, he said the college is now tracking the employment success of its graduates rather aggressively. Their stats are pretty darn good, and they will use them in their recruitment.

The only reason for 99% of the people who go to any college to be there is to improve their employment prospects. That's what needs to be measured.
 
Welcome to NCLB - college edition. Have fun with that!
 
Speaking as a conservative and mostly Republican, I would dearly love to see NCLB repealed.


 
Fire, Ready, Aim.

Anyone who does scientific experiments knows that the "science project" version of science is totally bogus. Real data analysis is a cycle where the first few runs serve only to refine the experiment after you figure out what the problems are and what data you really need to collect.

This is no different.

One example. My college has an outstanding collection of data, but it still can't answer some important questions because the relevant data are not collected. We have students who enroll for only one semester as first-time-in-college (FTIC) freshmen in order to meet a provisional admission option at a nearby university. They pass and leave. Success? No. They show up as failures in our IPEDS data, and we don't even have a tag to tell us who they were so we can measure how well this program works for our own use.

Any performance-based system needs years of dry-run data before you know what to measure and whether you are creating counterproductive pressures in the system. For CC transfer students, the only real measure is how they do after transfer, not how many are GIVEN a C in their gen-ed classes. Those are not easy data to collect, however, so it is easier to create perverse incentives that make education worse rather than better.
 
Class War Class War, watcha gonna do? Watcha gonna do when it comes for you?
 
The only metric I care about is, are the graduates employed?

By this metric a doctor who is driving a cab, or someone with a PhD working at a fast food joint, counts as a success.
 