Tuesday, December 03, 2013

 

What if Student Learning Counted in Performance Funding?



What if student learning counted as a metric in performance funding?

Okay, that’s wonky.  To translate: right now, many states are either using or considering a formula to determine funding levels for public colleges that would tie funding to “performance” along some prescribed set of measures.  I’ve seen relatively simple proposals, such as funding based simply on the number of graduates, and I’ve seen much more sophisticated and complex ones, such as the multivariate formula that Massachusetts applies now to community colleges.  (It doesn’t apply performance funding to UMass, though.  You may draw whatever conclusion about that you wish.)

Achieving the Dream, Complete College America, and a host of other foundation-funded actors have proposed a set of possible measures with the goal of encouraging colleges to focus on the moments that matter most for student success.  For example, setting “milestones” at fifteen and thirty credits toward graduation can encourage colleges to focus on the crucial first year.  It also softens the blow when students choose to transfer to a four-year school after a single year.  

Some of the more thoughtful formulae, such as the one in Massachusetts, also include “weighting” to avoid certain perverse incentives.  For example, one easy way to goose your graduation numbers would be to casually exclude high-risk students.  By giving extra credit for high-risk students who graduate, the formulas can push back against certain sorts of institutional gaming.
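
To make the weighting idea concrete, here's a minimal sketch (in Python) of how such a formula might work.  The milestone thresholds, credit values, and high-risk bonus are all invented for illustration; they don't reproduce the Massachusetts formula or any other state's.

# A hypothetical sketch of a weighted performance-funding metric.
# The milestones, weights, and bonus below are invented for illustration only.

MILESTONE_CREDIT = {15: 0.25, 30: 0.5}   # partial credit at fifteen and thirty credits
COMPLETION_CREDIT = 1.0                  # full credit for a graduate
HIGH_RISK_BONUS = 0.5                    # extra weight for high-risk students who succeed

def student_points(credits_earned, graduated, high_risk):
    """Points one student contributes to the college's score."""
    if graduated:
        points = COMPLETION_CREDIT
    else:
        # Credit for the highest milestone reached, if any.
        points = max((credit for threshold, credit in MILESTONE_CREDIT.items()
                      if credits_earned >= threshold), default=0.0)
    if high_risk:
        points *= 1 + HIGH_RISK_BONUS    # pushes back against excluding high-risk students
    return points

def college_score(students):
    """Total points; funding would be allocated in proportion to this."""
    return sum(student_points(**s) for s in students)

cohort = [
    {"credits_earned": 18, "graduated": False, "high_risk": False},  # hit the 15-credit milestone: 0.25
    {"credits_earned": 62, "graduated": True,  "high_risk": True},   # high-risk completer: 1.0 * 1.5 = 1.5
    {"credits_earned": 6,  "graduated": False, "high_risk": True},   # no milestone yet: 0.0
]
print(college_score(cohort))  # 1.75

The bonus multiplier is exactly the "extra credit" described above: a college gains more by graduating high-risk students than by quietly keeping them out of the denominator.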

But every formula I’ve seen relies on proxies for learning, typically in the form of credit accumulation.  They assume that retention and completion amount to evidence of learning.  And in a perfect world, they would.

In this imperfect world, though, that’s a considerable leap of faith.  Grade inflation can boost retention and graduation numbers in the short term, leading to a false conclusion that learning has improved.  That can happen through conscious policy, subtle cultural pressure, or simply a collective decision to default to the path of least resistance.  

Even in the absence of grade inflation, there’s a leap of faith in moving from mastery of individual course content or tasks to mastery of higher-level skills.  The usual argument for the liberal arts -- one that I largely believe -- holds that the study of seemingly irrelevant topics is valuable for the broader skills and outlooks it can impart.  That’s true whether the seemingly irrelevant topic is literary, historical, or even mathematical. The course I took on Tudor and Stuart England was a hoot, but I don’t draw many direct management lessons from Charles II.  Whether I developed a subtler and more ineffable (less effable?) sense of how power works is harder to say.  

If the purpose of education is learning -- as opposed to signalling, say -- then the relatively uncritical acceptance of such porous proxies for performance seems odd.  (“Porous Proxies” would have been a great name for a 90’s indie band.)  You’d think that if learning were the point, we’d measure that.  But that’s hard to do, especially at the college level where study is far more specialized than in high school.  To the extent that the lessons learned are “ineffable,” they’re tough to measure, by definition.  And it’s hard to shake the suspicion that the real driver of “performance” funding is anxiety about jobs, which is ultimately a function of economic policy decisions made in other places.  If it weren’t really about jobs, then flagship universities would be under the same scrutiny as community colleges.  They aren’t.   

The move to “competency-based” degrees is one way to address the issue of learning.  In a competency-based college, students get credit when they show that they know something or can do something.  The idea is to bypass the proxy measures altogether, and to measure the goal directly.  I can see it working brilliantly in many applied fields, though I admit not being entirely sure how it would work for Tudor and Stuart England.  There’s a danger -- largely theoretical at this point, but still -- that competencies could become Procrustean, cutting down curricula to things that lend themselves to checklists.  Still, the concept is in the very early stages of execution, and I’m hopeful that it will get refined over time.  Eventually, it could conceivably offer a way to base performance funding on actual learning.  We’re not there yet, but we could be.

If performance funding were based on some sort of measure of student learning, I wouldn’t be at all surprised to see some pretty radical shifts in who gets what.  At the end of the day, that may be the strongest practical argument against it.  And that would be a shame.

Wise and worldly readers, what do you think would happen if we based performance funding on student learning?

Comments:
Assessments are still proxies. You aren't measuring learning; you are measuring performance. You hope that by measuring the latter you can model the former. What you will get is what is happening in many K-12 schools across this country: a greater focus on teaching to the assessment tool. Once you use assessment tools for evaluation, the validity of the tool decreases. Systems get gamed. The more people an assessment tool has to cover, the easier it becomes to game. And the greater the emphasis on the assessment tool, the greater the motivation to game the system.
 
But then the question becomes, "competent for whose purposes?"

Communications majors can end up doing technical writing (clarity and completeness would be highly valued) or marketing communications ranging from telemarketing to web content (pizzazz and persuasiveness, regardless of accuracy, would be highly valued).

Same with math. Students can end up needing to be highly competent in a tightly limited range of operations if they're aiming to be accountants or nurses, or broadly competent in a wider range of analytical situations (as in many other STEM fields).

I fear that assessment of competencies would really gum up the works.
 
I think this is THE issue that we need to address in the student success world. I've got a colleague who likes to say, "There's nothing wrong with teaching to the test, as long as it's a good test." In the end, there's always a test. And people will always "teach" to it. It's survival of the fittest to see who can get the resources.

If the metric we assess schools by is completion, then math & English will forever be "barriers". Topics like the humanities will always be considered a waste of time for people going into the trades.

Administrators & funders don't like the idea of measuring learning outcomes because it's messy, difficult, and involves hard conversations with faculty. Faculty don't like it because it decreases their autonomy.

The problem is that you can't separate student learning from grades from completion. You can work on helping students complete through nonacademic avenues, but that's only going to go so far. Eventually, things like Connecticut & Florida happen. Those faculty who want to be left alone lose their jobs.

Rather than having a genuine, difficult discussion about what a college educated student should know (and the even harder discussion about how to measure it), we get different groups of people talking past each other. "Everybody needs to know algebra, because it helps you think better" vs "Most people don't need to know algebra, and the math they teach in community college is algebra, so people don't need community college math. Let's get rid of it." I know similar conversations are happening in writing. We need to have the conversation about what all college students need to know. And it needs to be motivated by money or it won't happen.

The alternative is a society in which community college graduates are great workers, but horrible citizens. To me, that means the further death of the middle class.
 
IMNSHO, "learning" is evidenced by still being able to do something (or quickly recover that ability without outside help) some time after the class or degree ends. No current system that I know about measures this or considers using it to evaluate prerequisite courses or programs.

This could be done on a small scale (performance on first test in 2nd semester calculus as an evaluation of 1st semester learning) or a larger one (performance on papers in junior-level classes as an evaluation of freshman comp).

Secondly, don't be so quick to dismiss "signalling". I suspect that many jobs require degrees for that reason alone and those employers might be the ones pushing for higher completion rates to get a larger pool of trainable persons at a profitable pay rate. They don't care what the student learned, except perhaps to come to class on time. It might be useful to really press employers about what they want students to have learned from completing their degree.
 
Since adequate grades are required to continue in a program, and since teacher grading is still superior to every other assessment system devised...

...completion is pretty obviously the best assessment of outcomes. If completion is a bad assessment of outcomes, it is because management has incentivized bad grading practices. Adding another test wouldn't change management practices; they're doing what they're doing for a reason.

 