One of the recurrent themes at the AACC panels was the difficulty in measuring the relative success of a community college, and the utter lack of rewards for that success. In the absence of either appropriate measures or incentives, judging performance – and encouraging good performance – is far harder than it ought to be.
At a panel comparing for-profit colleges to cc's, and asking about lessons that cc's could draw from the for-profits, the presenter noted in passing that the for-profits are built for growth and reward growth. Neither is really true of most cc's. For example, since all of the for-profits' revenues derive from tuition – much of it in the form of financial aid, but tuition nonetheless – enrollment growth equals revenue growth. They don't run programs that they don't expect to become and remain profitable.
In the cc world, since tuition covers less than the cost of production, it's not unusual to have a disconnect between enrollment growth and revenue growth. During recessions, for example, our enrollments typically increase at the same time that our external funding gets cut. Although many academics routinely decry the idea that colleges should be run like businesses, businesses at least understand the concept of 'investment.' If you want to grow, you have to invest. Since public institutions' budgets are largely independent of growth, growth actually registers on the ground as a burden. The consequences for things like 'customer service' are predictable. So instead of serving the folks who need us when they need us, we close sections, establish waiting lists, and outsource much of our core function to temps (adjuncts).
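Just to make that arithmetic concrete, here's a back-of-the-envelope sketch in Python. Every dollar figure is invented for illustration, but the structure matches the argument: the appropriation is flat, tuition sits below the cost of production, so every added student widens the gap – while at a for-profit, where tuition is priced above cost, every added student improves the bottom line.

```python
# Illustrative only: all dollar figures are invented for the example.
TUITION_PER_STUDENT = 4_000      # what the student (or financial aid) pays
COST_PER_STUDENT = 10_000        # actual cost of instruction and services
STATE_APPROPRIATION = 6_000_000  # fixed external funding, independent of enrollment

def cc_margin(enrollment: int) -> int:
    """Net position of a public cc: the appropriation is flat, so each
    added student brings in tuition but costs more than that to teach."""
    revenue = STATE_APPROPRIATION + enrollment * TUITION_PER_STUDENT
    costs = enrollment * COST_PER_STUDENT
    return revenue - costs

def for_profit_margin(enrollment: int, tuition: int = 12_000, cost: int = 10_000) -> int:
    """For-profit model: all revenue is tuition, priced above cost,
    so enrollment growth equals revenue (and margin) growth."""
    return enrollment * (tuition - cost)

for n in (1_000, 1_100):  # a ten percent enrollment bump
    print(f"enrollment {n}: cc margin {cc_margin(n):+,}, "
          f"for-profit margin {for_profit_margin(n):+,}")
```

With these made-up numbers, the same ten percent enrollment bump puts the cc 600,000 in the hole and adds 200,000 to the for-profit's margin. That's growth registering as a burden.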
In a wonderful panel on which criteria to use to judge community colleges, Gail Mellow, President of LaGuardia Community College in New York City, suggested a measure I've never seen used: percentage change in family income one to three years after graduation. (She cited a figure of 17 percent for her college.) She was very specific about 'family' income, as opposed to individual income. Although I'm not quite sure how that would work for transfer programs, it seems like a nifty measure for career or vocational programs. We don't collect that information locally, and judging by the show of hands at the panel, almost nobody else does, either. But something like that – she also suggested “return on investment,” a relatively straightforward business concept – would allow us to talk about student success without any threat of watering down standards, since employers would stop hiring graduates of a program that produced incompetent people. (When I was at Proprietary U, the Career Services office had excellent data on graduates' starting salaries, and actually shared it with the folks in Admissions to use as a recruiting tool. It can be done.)
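For the curious, the measure itself is simple arithmetic. Here's a minimal sketch, with invented names and incomes standing in for the data we don't actually collect:

```python
# Hypothetical records: family income at enrollment vs. one to three
# years after graduation. All names and figures are invented.
graduates = [
    {"name": "A", "income_before": 28_000, "income_after": 34_000},
    {"name": "B", "income_before": 41_000, "income_after": 46_000},
    {"name": "C", "income_before": 33_000, "income_after": 38_000},
]

def pct_change(before: float, after: float) -> float:
    """Percentage change in family income, the measure Mellow described."""
    return (after - before) / before * 100

changes = [pct_change(g["income_before"], g["income_after"]) for g in graduates]
avg_change = sum(changes) / len(changes)
print(f"average change in family income: {avg_change:.1f}%")
```

Return on investment would be a small variation on the same theme: divide the income gain by the cost of attendance. The hard part isn't the math; it's collecting the incomes.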
Much of the discussion at that panel centered on the crushing lack of good data by which to make decisions or comparisons. For example, the IPEDS database – the federal standard – looks at graduation rates only of first-time, full-time freshmen. That works fairly well at the Swarthmores of the world, but it's comically inappropriate for most community colleges, since those students are a small minority (though a growing one) of the students we get. In addition, a student who spends a year at a cc, transfers to a four-year school, and graduates shows up in our numbers as attrition. That's a fundamental, if fixable, flaw in the data, but it's held against us.
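To see how much the cohort definition moves the number, here's a toy example with invented records, scoring the same students two ways:

```python
# Invented student records to show how the cohort definition moves the number.
students = [
    # (first_time_full_time, outcome)
    (True,  "graduated"),
    (True,  "transferred"),   # transferred and finished elsewhere
    (True,  "dropped"),
    (False, "graduated"),     # part-time: invisible to the IPEDS cohort
    (False, "transferred"),
    (False, "dropped"),
]

# IPEDS-style: only first-time, full-time students in the denominator,
# and only graduating *here* counts as success.
cohort = [s for s in students if s[0]]
ipeds_rate = sum(1 for s in cohort if s[1] == "graduated") / len(cohort)

# Alternative: every student counts, and a successful transfer counts as success.
alt_rate = sum(1 for s in students if s[1] in ("graduated", "transferred")) / len(students)

print(f"IPEDS-style graduation rate: {ipeds_rate:.0%}")    # 33%
print(f"transfer-inclusive success rate: {alt_rate:.0%}")  # 67%
```

Same students, same outcomes, and the headline number doubles depending on who's counted and what counts. That's the flaw in a nutshell.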
The problems are basic and severe: we're punished for growth, the usual measures of success don't make sense in our context, and we can't get good data for measures that would actually make sense. But at least some very smart people are recognizing those problems, and starting to address them. A smart fellow once wrote that freedom is the insight into necessity. We're starting to get insights into our necessities. That's not much, but I'll take hope where I can find it.