Tuesday, December 03, 2013

What if Student Learning Counted in Performance Funding?

What if student learning counted as a metric in performance funding?

Okay, that’s wonky.  To translate: right now, many states are either using or considering a formula to determine funding levels for public colleges that would tie funding to “performance” along some prescribed set of measures.  I’ve seen relatively simple proposals, such as funding based simply on the number of graduates, and I’ve seen much more sophisticated and complex ones, such as the multivariate formula that Massachusetts applies now to community colleges.  (It doesn’t apply performance funding to UMass, though.  You may draw whatever conclusion about that you wish.)

Achieving the Dream, Complete College America, and a host of other foundation-funded actors have proposed a set of possible measures with the goal of encouraging colleges to focus on the moments that matter most for student success.  For example, setting “milestones” at fifteen and thirty credits toward graduation can encourage colleges to focus on the crucial first year.  It also softens the blow when students choose to transfer to a four-year school after a single year.
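
For the concretely minded, here’s a back-of-the-envelope sketch of how milestone counting might work.  The fifteen- and thirty-credit thresholds come from the proposals above; everything else -- the function names, the toy roster -- is mine, purely for illustration.

```python
# Toy sketch only: counting milestone completions for performance funding.
# The 15- and 30-credit thresholds are from the proposals discussed above;
# all names and numbers below are hypothetical.

MILESTONES = (15, 30)

def milestone_points(credits_earned):
    """Count how many milestones one student has crossed."""
    return sum(1 for m in MILESTONES if credits_earned >= m)

def campus_milestone_total(student_credits):
    """Sum milestone points across a campus roster.

    A student who transfers out after one year with 30 credits still
    contributes two points -- which is how milestones soften the blow
    of early transfer.
    """
    return sum(milestone_points(c) for c in student_credits)

# Example roster: a stop-out at 9 credits, a transfer-out at 30,
# and a continuing student at 45.
print(campus_milestone_total([9, 30, 45]))  # 0 + 2 + 2 = 4
```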

Some of the more thoughtful formulas, such as the one in Massachusetts, also include “weighting” to avoid certain perverse incentives.  For example, one easy way to goose your graduation numbers would be to quietly exclude high-risk students.  By giving extra credit for high-risk students who graduate, the formulas can push back against certain sorts of institutional gaming.
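
And here’s the weighting idea in the same toy form.  The 1.0 and 1.5 weights are invented -- the Massachusetts formula has its own categories and values -- but the logic is visible: under weighting, a graduating high-risk student is worth more than a graduating low-risk one, so “creaming” the intake pool leaves points on the table.

```python
# Toy sketch only: risk weighting to blunt the incentive to exclude
# high-risk students.  The weights and categories are hypothetical.

RISK_WEIGHT = {"standard": 1.0, "high_risk": 1.5}

def weighted_completions(graduates):
    """graduates: list of (student_id, risk_category) pairs."""
    return sum(RISK_WEIGHT[category] for _, category in graduates)

# Same headcount, different intake strategies.
inclusive = [("a", "standard"), ("b", "high_risk"), ("c", "high_risk")]
creamed = [("d", "standard"), ("e", "standard"), ("f", "standard")]

print(weighted_completions(inclusive))  # 1.0 + 1.5 + 1.5 = 4.0
print(weighted_completions(creamed))    # 1.0 * 3 = 3.0
```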

But every formula I’ve seen relies on proxies for learning, typically in the form of credit accumulation.  They assume that retention and completion amount to evidence of learning.  And in a perfect world, they would.

In this imperfect world, though, that’s a considerable leap of faith.  Grade inflation can boost retention and graduation numbers in the short term, leading to a false conclusion that learning has improved.  That can happen through conscious policy, subtle cultural pressure, or simply a collective decision to default to the path of least resistance.  

Even in the absence of grade inflation, there’s a leap of faith in moving from mastery of individual course content or tasks to mastery of higher-level skills.  The usual argument for the liberal arts -- one that I largely believe -- holds that the study of seemingly irrelevant topics is valuable for the broader skills and outlooks it can impart.  That’s true whether the seemingly irrelevant topic is literary, historical, or even mathematical.  The course I took on Tudor and Stuart England was a hoot, but I don’t draw many direct management lessons from Charles II.  Whether I developed a subtler and more ineffable (less effable?) sense of how power works is harder to say.

If the purpose of education is learning -- as opposed to signaling, say -- then the relatively uncritical acceptance of such porous proxies for performance seems odd.  (“Porous Proxies” would have been a great name for a ’90s indie band.)  You’d think that if learning were the point, we’d measure that.  But that’s hard to do, especially at the college level, where study is far more specialized than in high school.  To the extent that the lessons learned are “ineffable,” they’re tough to measure, by definition.  And it’s hard to shake the suspicion that the real driver of “performance” funding is anxiety about jobs, which is ultimately a function of economic policy decisions made elsewhere.  If it weren’t really about jobs, then flagship universities would be under the same scrutiny as community colleges.  They aren’t.

The move to “competency-based” degrees is one way to address the issue of learning.  In a competency-based college, students get credit when they show that they know something or can do something.  The idea is to bypass the proxy measures altogether and to measure the goal directly.  I can see it working brilliantly in many applied fields, though I admit I’m not entirely sure how it would work for Tudor and Stuart England.  There’s a danger -- largely theoretical at this point, but still -- that competencies could become Procrustean, cutting curricula down to things that lend themselves to checklists.  Still, the concept is in the very early stages of execution, and I’m hopeful that it will get refined over time.  Eventually, it could conceivably offer a way to base performance funding on actual learning.  We’re not there yet, but we could be.
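
For what it’s worth, “credit upon demonstration” has a simple data shape.  Here’s a bare-bones sketch; every competency name and credit value in it is hypothetical, since real programs define their own.

```python
# Toy sketch only: credit follows demonstrated competencies, not seat time.
# All competency names and credit values are hypothetical.

class CompetencyRecord:
    def __init__(self):
        self.demonstrated = set()

    def demonstrate(self, competency):
        """Record a successful demonstration (e.g., a passed assessment)."""
        self.demonstrated.add(competency)

    def credits(self, credit_map):
        """Total credit from what has actually been shown."""
        return sum(credit_map[c] for c in self.demonstrated)

credit_map = {"write_a_memo": 3, "read_a_balance_sheet": 3}
record = CompetencyRecord()
record.demonstrate("write_a_memo")
print(record.credits(credit_map))  # 3 -- credit only for what's demonstrated
```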

If performance funding were based on some sort of measures of student learning, I wouldn’t be at all surprised to see some pretty radical shifts in who gets what.  At the end of the day, that may be the strongest practical argument against it.  And that would be a shame.

Wise and worldly readers, what do you think would happen if we based performance funding on student learning?