Sunday, May 05, 2013

Measuring Success

How do you know when a college is doing a good job?

The traditional answer was either “by reputation” or “because the faculty tells you so.”  But both of those are flawed.  The former is based on a host of factors having little or nothing to do with teaching and learning -- the fate of the basketball team, say -- and the latter is beset by a basic conflict of interest.  “We’re experts -- just ask us!” doesn’t convince most laypeople, nor should it.

But if we don’t use those standards, what should we use?

U.S. News uses “inputs,” such as money spent per student or volumes held in a library.  But spending money and getting results are two different things.  In fact, there’s a valid argument that getting better results with less money reflects good management, even though it would be punished in the U.S. News rankings.

We could use the results of “outcomes assessment.”  That has been the goal/fear of many in higher education for over a decade.  But outcomes assessment doesn’t lend itself to such broad-stroke judgments.  If it’s done with an eye towards generating useful results for actual improvement, then it tends to be locally defined, and therefore resistant to cross-institutional comparisons.  If it’s done with a blunt instrument, like a standardized test, then it will tend to miss most of what colleges actually do.  If the test is low-stakes, then students may not buy into it.  If it’s high-stakes, then it will generate the same “teaching to the test” and widespread cheating that NCLB generated at the high school level.  And although we don’t have an elegant language for talking about it, colleges with high entrance standards will always, inevitably, score higher on standardized measures than will colleges with open doors.  That’s by design, and it does not reflect on the quality of work done by either college.  Give me the same entering class as Swarthmore, and I’ll show you some damn good test scores.  At a certain level, we’re still measuring inputs.
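
To make that selection effect concrete, here’s a hypothetical back-of-the-envelope simulation (made-up numbers, nothing from any real college) in which two colleges teach equally well and differ only in who walks in the door:

```python
import random

random.seed(1)

def average_exit_score(entering_scores, value_added=10):
    # Both colleges teach equally well: every student gains the same ten points.
    return sum(score + value_added for score in entering_scores) / len(entering_scores)

# Hypothetical entering classes (invented distributions, not real data):
# the selective college admits only students who already test well;
# the open-door college takes everyone.
selective_entrants = [random.gauss(80, 5) for _ in range(1000)]
open_door_entrants = [random.gauss(55, 15) for _ in range(1000)]

print(f"Selective college exit average: {average_exit_score(selective_entrants):.1f}")
print(f"Open-door college exit average: {average_exit_score(open_door_entrants):.1f}")
# Identical value added, very different "outcomes": the gap on the exit
# test is just the gap at the entrance, passed through untouched.
```

Run it, and the “better” college wins by roughly the same margin it enjoyed on day one.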

Alternatively, we could go with graduation rates.  But those, too, are flawed at best.  Graduation rates have a lot to do with student goals, for example.  Any community college administrator can rattle off the basic flaws of the IPEDS database: it only counts “first-time, full-time, degree-seeking” students, who constitute a minority of community college enrollees; it ignores student intention, so a student who only ever intended to spend a year at a cc before transferring counts as a dropout; and it stops counting quickly -- at 150 percent of “normal time,” or three years for an associate’s degree -- so students who switch to part-time status are counted as failures, even if they eventually graduate.  I’m not saying this out of sour grapes: Holyoke has one of the highest graduation rates among cc’s in the state, despite being located in the state’s lowest-income city.  We punch well above our weight.  But a flawed measure is a flawed measure.
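
For the number-minded, here’s a minimal sketch of how those counting rules play out (hypothetical student records and simplified rules; real IPEDS reporting is more involved than this):

```python
# Hypothetical records and simplified rules, purely for illustration.
students = [
    {"first_time": True,  "full_time": True,  "goal": "degree",   "outcome": "graduated",   "years": 2},
    {"first_time": True,  "full_time": True,  "goal": "degree",   "outcome": "graduated",   "years": 5},  # slowed to part-time
    {"first_time": True,  "full_time": False, "goal": "degree",   "outcome": "graduated",   "years": 4},  # excluded: part-time at entry
    {"first_time": False, "full_time": True,  "goal": "degree",   "outcome": "graduated",   "years": 2},  # excluded: returning student
    {"first_time": True,  "full_time": True,  "goal": "transfer", "outcome": "transferred", "years": 1},  # counted as a dropout
    {"first_time": True,  "full_time": True,  "goal": "degree",   "outcome": "dropped_out", "years": None},
]

# IPEDS-style rate: first-time, full-time entrants who graduate within
# 150 percent of normal time (three years for an associate's degree).
cohort = [s for s in students if s["first_time"] and s["full_time"]]
completers = [s for s in cohort if s["outcome"] == "graduated" and s["years"] <= 3]
print(f"IPEDS-style rate: {len(completers)}/{len(cohort)}")  # 1/4

# Goal-sensitive rate: every entrant counts, and "success" means reaching
# the goal the student actually walked in with, on whatever timeline.
succeeded = [s for s in students
             if (s["goal"] == "degree" and s["outcome"] == "graduated")
             or (s["goal"] == "transfer" and s["outcome"] == "transferred")]
print(f"Goal-sensitive rate: {len(succeeded)}/{len(students)}")  # 5/6
```

Same six students, two defensible-sounding rates, and neither one tells you whether anyone learned anything.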

We could go with starting salaries and/or job placement rates; since the Great Recession started, those have been the political favorites.  But they tell you much more about the local economy than about the college.  New graduates will have a much easier time finding work in New York City than in Buffalo, regardless of how well their college taught them.  Community colleges with strong “transfer” identities will suffer in the comparison, even though their graduates are actually setting themselves up for long-term success.  (A college junior on a pre-med track isn’t making much yet.)  And the programmatic mix at a college will have much more impact on starting salaries than will gradations of quality within each program.  Nursing grads will start at higher salaries than will journalism grads, even if the journalism program is really good.
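
Here’s that mix effect in miniature, with invented salaries and headcounts:

```python
# Invented salaries and headcounts, purely to illustrate the mix effect.
# Format: program -> (average starting salary, number of graduates).
college_a = {"nursing": (55_000, 20), "journalism": (38_000, 80)}  # stronger programs
college_b = {"nursing": (52_000, 80), "journalism": (33_000, 20)}  # weaker programs

def average_starting_salary(programs):
    total_pay = sum(salary * grads for salary, grads in programs.values())
    total_grads = sum(grads for _, grads in programs.values())
    return total_pay / total_grads

print(f"College A average: ${average_starting_salary(college_a):,.0f}")  # $41,400
print(f"College B average: ${average_starting_salary(college_b):,.0f}")  # $48,200
# A beats B in every single program, yet B "wins" the overall average,
# because B graduates far more nurses than journalists.
```

College B posts the higher average even though College A pays better program-for-program; the ranking rewards the mix, not the teaching.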

At an even more basic level, asking how well a college is doing presupposes knowing what it’s supposed to be doing.  Research universities have different missions than community colleges do: the former are supposed to do cutting-edge research, and the latter are supposed to help everybody.  The former are based on meritocracy, the latter on democracy.  Yes, colleges have mission statements, but they tend to be broad and vague.  In the absence of a relatively robust definition of a given college’s mission and place in the academic universe, it’s far too easy to get the measure wrong.  As Einstein supposedly put it, you don’t judge a fish by its ability to climb a tree.

As “performance funding” measures catch on, the stakes of this topic are moving from reputational to financial.  This stuff matters.

So, wise and worldly readers, I turn to you.  How do you know when a college is doing a good job?