At a meeting this week, I saw two articles of faith crash into each other. I’m trying to sort out the pieces. They were:
High school GPA and course selection are better predictors of success in college than a single score on a placement test.
Hiring more staff (“administrative bloat”) is bad.
The two conflict, because collecting, interpreting, and applying high school transcripts and other forms of information is much more labor-intensive than simply getting a test score from a machine. Selective universities and colleges have relatively large admissions staffs in order to sort through and compare these things. We don’t. We’ve never had to.
If we want to improve placement, we need to hire staff. The cost comes before the benefit, making it a hard sell.
Alternatively, of course, we could go “full California” and rely on student self-reported GPA. John Hetts gave a presentation a couple of years ago showing excellent placement results from that approach. But I can’t imagine the model gaining acceptance here without some sort of mandate. It’s a bit too radical for most.
If every high school used the same grading system, it would be relatively straightforward. But they don’t. Some use a 4.0 scale, some use a 5.0 scale (weighted for honors classes), some use a 100-point scale, and some use letter grades (A–F). Each one calculates GPA slightly differently.
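To make the normalization problem concrete, here is a minimal sketch of converting those scales to a common 4.0 scale. The conversion rules are hypothetical simplifications of my own; a real crosswalk would have to be validated against each high school’s actual grading policy, which is exactly the labor-intensive part.

```python
# Hypothetical conversion rules for illustration only; real transcripts
# would need per-school validation (e.g., how honors weighting is applied).

LETTER_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def to_4_point(gpa, scale):
    """Convert a GPA reported on one of several scales to a 4.0 scale."""
    if scale == "4":        # already on a 4.0 scale
        return gpa
    if scale == "5":        # weighted 5.0 scale: rescale proportionally
        return gpa * 4.0 / 5.0
    if scale == "100":      # 100-point scale: rescale proportionally
        return gpa * 4.0 / 100.0
    if scale == "letter":   # letter grade: look up its point value
        return LETTER_POINTS[gpa.upper()]
    raise ValueError(f"unknown scale: {scale}")

print(to_4_point(4.5, "5"))       # 3.6
print(to_4_point(88, "100"))      # 3.52
print(to_4_point("B", "letter"))  # 3.0
```

Even this toy version glosses over the hard questions, like whether a 4.5 on a weighted scale should really map below a 4.0 on an unweighted one.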
The Accuplacer survives as a placement instrument not because it lives up to its name, but because it’s easy and cheap to deliver quickly at scale. Getting to something more accurate would require spending more money and person-hours upfront, for an improvement that would be difficult to quantify for some time. That makes the payback hard to measure against other possible uses of limited resources.
The core of the issue is that improvement sometimes requires investment. Or, to put it more bluntly, money.
Bailey, Jaggars, and Jenkins’ book Redesigning America’s Community Colleges notes that many of the most effective interventions reduce the cost per graduate, but raise the cost per student. If you’re funded per FTE (full-time equivalent student), and funding is tight, that creates a cruel dilemma. We know several changes that would make significant differences, but each of them has a non-trivial cost that comes before the prospective (and unquantified) benefit.
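The arithmetic behind that dilemma is simple enough to sketch with made-up numbers (these figures are purely illustrative, not from the book): an intervention that raises cost per student can still lower cost per graduate if it raises the graduation rate enough.

```python
# Purely illustrative numbers: cost per graduate falls even as
# cost per student rises, because the graduation rate improves.

def cost_per_graduate(cost_per_student, grad_rate):
    return cost_per_student / grad_rate

baseline = cost_per_graduate(10_000, 0.25)            # $40,000 per graduate
with_intervention = cost_per_graduate(12_000, 0.40)   # $30,000 per graduate

print(baseline, with_intervention)
```

The catch, of course, is that FTE-based funding pays the numerator up front while the denominator improves years later, if at all.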
When you have several possible interventions that carry similar upfront costs, and very little money with which to work, it would be helpful to be able to compare the expected payoff of each. But we’re just not there yet.
So, this one is a little more “inside baseball” than I usually go, but hope springs eternal. Has anyone found a reasonably effective way to do multi-factor placement at scale when you can’t hire a bunch of new staff people to evaluate transcripts?