I’ve been following with interest the stories about the federal government trying to decide which measures to use to judge the performance of community colleges against each other.
Unfortunately, some of the measures chosen appear far too simplistic to give good information.
For vocational programs, placement rates and starting salaries may make some sense. Those are generally more reflective of the overall market than they are of the performance of any given program, but it’s easy to argue that a job prep program that doesn’t result in jobs doesn’t have much reason to exist. (Of course, the same argument could be applied to many Ph.D. programs...) For transfer programs, though, we’d need different measures.
Graduation rates are notoriously unhelpful, since they assume, falsely, that every student starts as a freshman and intends to complete a degree. We get a fair number of “visiting” students from other colleges who take a few courses here en route to graduating at their “home” colleges. To count that as attrition for us would be asinine. We get a significant number of students who intend to do a year at the cc and then transfer; they’re usually the kids who had spotty records in high school, and who have been told by Mom and Dad that they have a year to put up or shut up. Success, in those cases, means leaving the cc after a year. Again, that shows up in the stats as attrition, even though the kid went on to get a degree elsewhere. (A close variation on this is the involuntary reverse transfer -- the kid who drank his way to a terrible GPA at a four-year college, and who arrives here seeking redemption. Those kids often don’t bother graduating here before heading back from whence they came.)
In the wrong hands, too much of a focus on graduation rates could also create pressure to lower academic standards, thereby defeating the purpose of college in the first place.
Some of the confounding variables are less obvious. Looking at comparative cc graduation rates by state, I’m struck that the states with the highest rates generally have the least viable four-year sectors. (Check out the charts at this link. Many of the states with the highest four-year college graduation rates have the lowest two-year college graduation rates, and vice versa.) That makes sense, if you think about it. In an area in which the cc is the only game in town, high achievers who don’t want to move away will attend the cc. In areas with high concentrations of four-year colleges, those same high achievers will skip the cc. The different student mix at the cc’s in the different states will result in different graduation rates, independent of anything the cc’s do or don’t do.
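For the statistically inclined, here’s a toy sketch in Python of that composition effect. Every number and college name below is invented; the point is only that identical institutional performance can produce very different headline graduation rates once the student mix changes.

```python
# Toy illustration (all numbers invented): two colleges with identical
# per-group graduation rates but different student mixes end up with
# different overall graduation rates.

# Probability of graduating, by student type -- the same at both colleges.
grad_rate = {"high_achiever": 0.70, "typical": 0.35}

# Enrollment mix differs with the local four-year landscape.
mix = {
    "Prairie CC (only game in town)": {"high_achiever": 0.40, "typical": 0.60},
    "Metro CC (many nearby 4-years)": {"high_achiever": 0.05, "typical": 0.95},
}

for college, shares in mix.items():
    overall = sum(shares[group] * grad_rate[group] for group in grad_rate)
    print(f"{college}: overall graduation rate = {overall:.0%}")
# Prairie CC: 49%; Metro CC: 37% -- a 12-point gap from student mix alone.
```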
If the technology and privacy issues could be addressed, I’d like to see a measure that shows how successful cc grads are when they transfer to four-year schools. If the grads of Smith CC do well, and the grads of Jones CC do poorly, then you have a pretty good idea where to start. That would offset the penalty that otherwise accrues to cc’s in areas with vibrant four-year sectors, and it would provide an incentive to keep the grading standards high. If you get your graduation numbers up by passing anyone who can fog a mirror, presumably that will show up in their subsequent poor performance at destination schools. If your grads thrive, then you’re probably doing something right.
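Here’s a minimal sketch, again in Python, of what that measure might look like. The records, names, and GPAs are all hypothetical; a real version would need actual matched student records from the destination schools, which is exactly where the technology and privacy issues come in.

```python
# Sketch of a transfer-success measure on invented records.
# Each record: (sending CC, destination school, GPA at the destination).
from collections import defaultdict
from statistics import mean

transfer_records = [
    ("Smith CC", "State U", 3.4),
    ("Smith CC", "State U", 3.1),
    ("Smith CC", "Tech U", 2.9),
    ("Jones CC", "State U", 2.2),
    ("Jones CC", "Tech U", 2.5),
]

# Group destination GPAs by the college that sent the student.
by_cc = defaultdict(list)
for cc, dest, gpa in transfer_records:
    by_cc[cc].append(gpa)

for cc, gpas in sorted(by_cc.items()):
    print(f"{cc}: n={len(gpas)}, mean destination GPA = {mean(gpas):.2f}")
# Jones CC: n=2, mean destination GPA = 2.35
# Smith CC: n=3, mean destination GPA = 3.13
```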
Finally, of course, there’s an issue of preparation. The more economically depressed an area, generally speaking, the less academically prepared its entering students will be. If someone who’s barely literate doesn’t graduate, is that because the college didn’t do its job, or because it did? As with the K-12 world, it’s easy for “merit-based” measures to wind up ratifying plutocracy. That would run directly counter to the mission of community colleges, and to my mind, would be a tragic mistake. Any responsible use of performance measures would have to ‘control’ for the economics of the service area. If a college manages to outperform its demographics, it’s doing something right; if it underperforms its demographics, it’s dropping the ball.
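One hedged sketch of what “controlling for the economics of the service area” might look like: fit a simple least-squares line predicting graduation rate from a service-area income measure, then rank colleges by how far they sit above or below the line. All the numbers and names below are invented for illustration, and a real analysis would want more variables and far more care.

```python
# Toy "outperforming your demographics" measure: regress graduation rate
# on median household income of the service area, rank by residual.
colleges = {
    # name: (median household income, $K; graduation rate)
    "Alpha CC": (38, 0.22),
    "Bravo CC": (52, 0.30),
    "Charlie CC": (45, 0.33),
    "Delta CC": (70, 0.35),
}

xs = [v[0] for v in colleges.values()]
ys = [v[1] for v in colleges.values()]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n

# Ordinary least-squares slope and intercept for a one-variable line.
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

# Residual > 0: outperforming the demographics; < 0: underperforming.
for name, (income, rate) in colleges.items():
    residual = rate - (intercept + slope * income)
    print(f"{name}: residual = {residual:+.3f}")
# Charlie CC's positive residual marks it as beating its demographics.
```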
I’m not naive enough to think that rankings won’t be used in some basically regressive and/or punitive way. But if we at least want to make informed choices, we should try to get the rankings right. Otherwise we’ll wind up rewarding all the wrong things.
Wise and worldly readers, what measures would you use to gauge the effectiveness of transfer-oriented programs at community colleges?