This story should be required reading for every state legislator in the country.
Far more community college students transfer before completing the associate’s degree than complete it first. According to a new report from the National Student Clearinghouse Research Center, about 350,000 transfer before completing, compared with about 60,000 who complete first.
That matters in several ways.
Most basically, it suggests that measuring community colleges by their graduation rates misses the point. A student who does a year at Brookdale, transfers to Rutgers, and subsequently graduates got what she wanted, but she shows up in our numbers as a dropout. In states with “performance funding,” the community college could be punished for her decision, even if transferring was what she intended to do all along.
It also contributes to an ongoing stigma against community colleges. People who read only the “headline” numbers, and don’t bother with the asterisks, see the graduation rates and assume that something is going horribly wrong. But a ratio of 35 to 6 is such a honker of an asterisk that failing to account for it amounts to misrepresentation. (And that’s before considering what the spread of dual enrollment and early college programs, among other things, does to the IPEDS cohort.)
The Voluntary Framework of Accountability helps to compensate for that, but the IPEDS rate remains the coin of the realm among non-specialists.
My preferred measures of community college performance would be based on actual student behavior. For example, does the percentage of bachelor’s grads in a given area who have community college credits roughly match the percentage of undergrads enrolled at community colleges? (Nationally, it does.) If so, the idea of community colleges as dropout factories is hard to sustain. For programs not built around transfer, how are the employment outcomes? I wouldn’t look at loan repayment rates, simply because the percentage of students with loans is so low; it’s a skewed sample. I would look at achievement gaps by race, sex, age, and income. I would look at ROI for public investment, as well as at local reputation.
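To make that first comparison concrete, here’s a minimal sketch with invented numbers and a hypothetical `parity_ratio` helper (none of it drawn from the NSC report): if community colleges really were dropout factories, the share of bachelor’s grads carrying community college credits would lag far behind the community college share of undergraduate enrollment.

```python
def parity_ratio(share_of_ba_grads_with_cc_credits, share_of_undergrads_at_ccs):
    """Hypothetical parity check: a ratio near 1.0 means community college
    students show up among bachelor's graduates at roughly the rate you'd
    expect from their share of undergraduate enrollment."""
    return share_of_ba_grads_with_cc_credits / share_of_undergrads_at_ccs

# Illustrative, invented figures.
print(parity_ratio(0.45, 0.40))  # ~1.1: transfer students are holding their own
print(parity_ratio(0.10, 0.40))  # 0.25: this is what a "dropout factory" pattern would look like
```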
If we wanted to get really specific, I’d hire the folks who calculate the “wins above replacement” statistic for baseball players and have them apply something similar to colleges. The WAR number takes a given player’s production and compares it to what a readily available replacement-level player at the same position would produce. (For baseball geeks, I’d expect Mike Trout’s WAR to be strongly positive, and Jose Reyes’ to be strongly negative, even without accounting for his pitching.) Applied to colleges, a similar number could calculate expected results for a given set of students with a given academic profile, then measure whether a college exceeds, meets, or trails that expectation. That’s very different from a raw graduation rate, because it accounts for the profile of the students coming in. Selective colleges screen out anybody high-risk, so their entering freshmen should be expected to do very well almost regardless of what the college actually does. Open-admission colleges take on much riskier students, so the expected outcomes would be different. Failing to account for that is a basic measurement error.
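As a rough sketch of what a WAR-style measure might look like (the risk categories, expected rates, and college figures below are all invented for the example; this isn’t drawn from any actual rating system), the idea is to compare a college’s actual completion-or-transfer rate against the rate its entering class would predict:

```python
# Illustrative "outcomes above expectation" sketch, with made-up numbers.
EXPECTED_SUCCESS_RATE = {   # assumed completion-or-transfer rates by entering profile
    "low_risk": 0.80,
    "medium_risk": 0.55,
    "high_risk": 0.35,
}

def expected_rate(entering_class):
    """Weighted expectation given the mix of students a college actually admits."""
    total = sum(entering_class.values())
    return sum(EXPECTED_SUCCESS_RATE[group] * count
               for group, count in entering_class.items()) / total

def outcomes_above_expectation(entering_class, actual_rate):
    """Positive = the college beats what its student profile predicts."""
    return actual_rate - expected_rate(entering_class)

# A selective college admitting mostly low-risk students...
selective = {"low_risk": 900, "medium_risk": 90, "high_risk": 10}
# ...versus an open-admission college taking on far riskier students.
open_admission = {"low_risk": 200, "medium_risk": 400, "high_risk": 400}

print(outcomes_above_expectation(selective, 0.79))       # ~+0.02: roughly what its intake predicts
print(outcomes_above_expectation(open_admission, 0.57))  # ~+0.05: lower raw rate, but further above expectation
```

The point of the toy example is just that the college with the lower raw rate can be the one doing more with the students it actually enrolls.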
Of course, any measure applied to community colleges should also be applied to the rest of higher education. Without that, there’s no basis for comparison. Parity of funding, parity of measures; fair is fair.
Better yet, I’d like to see similar ROI analyses applied to, say, regressive tax cuts. Fair is fair. Compare the social payoff of low marginal taxes and public-sector austerity to the social payoff of progressive taxes and a robust public sector. Because if we’re serious about improving institutional performance at scale, we’ll need to provide the resources to do it.
That’s a long-term goal. In the short term, I’d be happy just to see policymakers and opinion leaders connect the dots and stop taking the headline graduation rates literally. The level of error is far more than we should ask an asterisk to carry.