Sunday, August 25, 2013

 

What If We Studied Success?



The more I’ve reflected on President Obama’s plans, as described last week, the less I like them.  Rating community colleges assumes that students choose among many; in practice, community colleges are defined by geography, and few students have more than two from which to choose.  Rather than pitting them against each other, it would make more sense to lift all boats.

Having said that, though, I’m quite taken with the chart in this piece from Brookings.  Beth Akers and Matthew Chingos did a basic regression analysis using the fifteen largest public universities in America.  The chart shows the amount by which each university either overperformed or underperformed its demographics, using six-year graduation rates as the base measure.
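The mechanics of that kind of analysis are simple enough to sketch.  The idea is to regress graduation rates on demographic predictors and read each school's residual as over- or under-performance.  Everything below is illustrative: the predictors, the numbers, and the school labels are made-up assumptions, not the actual Brookings dataset.

```python
# A minimal sketch of a Brookings-style residual analysis.
# All data here is invented for illustration.
import numpy as np

# Toy predictors per school: [share of Pell recipients, avg SAT / 100]
X = np.array([
    [0.16, 13.0],   # hypothetical selective flagship
    [0.35, 11.0],   # hypothetical access-oriented public
    [0.28, 11.8],
    [0.22, 12.4],
])
# Observed six-year graduation rates for the same schools
grad_rate = np.array([0.91, 0.78, 0.74, 0.86])

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, grad_rate, rcond=None)

predicted = A @ coef
residual = grad_rate - predicted  # positive = punching above its weight

for r in residual:
    print(round(float(r), 3))
```

The residual, not the raw graduation rate, is the "sabermetric" number: it asks how a school did relative to what its student profile predicts.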

It’s an admittedly partial picture, but it gets at the “sabermetrics” I invoked last week.  Given the socioeconomic profile of your students, what should your grad rate be?  By that measure, the University of Michigan - Ann Arbor and the University of Central Florida have some work to do, but Michigan State and Rutgers are punching above their weight.  

Presumably, similar analyses could be run for institutions in different sectors.  Which community colleges are doing better than their demographics would lead us to expect?  Which four-year public colleges?  For that matter -- and this would be very interesting -- which for-profits?

The real issue is what to do with the information once we have it.  Certainly I’d want to see multiple measures, since any single number is subject to all sorts of distortion.  For example, a graduation rate by itself could reflect excellent teaching or grade inflation or an unusual program mix or an exogenous shock.  Ideally we’d have some sort of measure of actual learning.  In the absence of that, though, it would help to have a more nuanced blend of metrics that would lessen the impact of any given anomaly.
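One simple way to blend metrics, as a sketch: standardize each metric across institutions and average the z-scores, so that no single anomalous number dominates.  The metric names, weights, and figures below are illustrative assumptions, not a proposed formula.

```python
# Illustrative composite score: equal-weight average of z-scores
# across several metrics. All names and values are invented.
from statistics import mean, stdev

schools = {
    "College A": {"grad_rate": 0.62, "transfer_rate": 0.25, "job_placement": 0.70},
    "College B": {"grad_rate": 0.55, "transfer_rate": 0.35, "job_placement": 0.75},
    "College C": {"grad_rate": 0.70, "transfer_rate": 0.20, "job_placement": 0.65},
}

def zscores(values):
    """Standardize a list of values to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

metrics = ["grad_rate", "transfer_rate", "job_placement"]
names = list(schools)
z = {m: zscores([schools[n][m] for n in names]) for m in metrics}

# Equal-weight composite: average z-score across the three metrics
composite = {n: mean(z[m][i] for m in metrics) for i, n in enumerate(names)}
for n, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(n, round(score, 2))
```

A real scorecard would weight metrics deliberately and handle missing data, but even this toy version shows how an outlier on one measure gets diluted by the others.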

Then, once we have that, I’d love to see the Feds pony up some money for serious comparative studies.  What is, say, Rutgers doing that the U of Michigan isn’t?  In my perfect world, the point of that kind of study would be to extract useful lessons.  What are the consistently high-performing colleges in each sector doing that their peers could learn from?  (Admittedly, the for-profits might not want to participate in that, since they compete with each other.  But it’s worth asking.)  What are the most impressive community colleges doing that other community colleges could adapt?

That kind of study rarely happens now.  The Community College Research Center does heroic and wonderful work, but it’s one place.  Papers at the League for Innovation or the AACC tend to be autobiographical success stories; it’s rare to see or hear systematic examination of underperformance.  Titles III and V fund some wonderful projects, and some cross-conversation occurs among them, but the comparisons are not systematic, and nobody particularly wants to admit struggling.  Gates and Lumina don’t fund comparative work, as far as I’ve seen.

The beauty of an approach like this is twofold.  It’s cheap, and it’s egalitarian.  It would use documented difference in performance to lift all boats, rather than to decide more efficiently who to starve.  In that sense, it’s much truer to the mission of public higher education than a sort of Hobbesian war of each against all.  Deploying a squadron of sociologists to improve public higher education in America strikes me as public money well spent.  Far better to do that than to set colleges at each other’s throats, gaming statistics to make next year’s payroll.

Comments:
Those are really interesting data!

And it isn't hard to believe that a highly selective university like Michigan should expect 95% of its students to graduate in six years based on test scores, but it is hard to imagine how you would keep the last few percent from dropping out over the high cost to out-of-state students or the time spent at football parties.
 
What is the variation over time? How much of the difference between Michigan and Rutgers is normal noise?
 
I would contact the Gates Foundation and ask them about funding this kind of research. They have a program that looks at developing innovative ways to help with student success. Maybe they would be interested in helping to develop a scorecard that measures where students are doing best, and in helping struggling schools partner with the success stories to improve completion rates.

Program Manager info here: http://www.gatesfoundation.org/What-We-Do/US-Program/Postsecondary-Success/Strategy-Leadership
 
It's too easy to game the system (push the kids towards easy-to-complete degrees, make courses easier, make passing easier); there are too many variables that affect completion that can't be accounted for; and it will send the wrong incentives (kids come out with the wrong degrees for what work is available, or kids don't know enough to be employable even with the right degree).

The main problem will be:
raw data in -> black box transformation -> ranking data out
and very few people will know, or be able to decide, whether the black box transformation did anything useful or fair ... while the rankings make huge impacts on institutions.
