Friday, September 09, 2011
The Right Measures
Unfortunately, it appears that some of the measures chosen are far too simplistic to yield good information.
For vocational programs, placement rates and starting salaries may make some sense. Those are generally more reflective of the overall market than they are of the performance of any given program, but it’s easy to argue that a job prep program that doesn’t result in jobs doesn’t have much reason to exist. (Of course, the same argument could be applied to many Ph.D. programs...) For transfer programs, though, we’d need different measures.
Graduation rates are notoriously unhelpful, since they assume, falsely, that every student starts as a freshman and intends to complete a degree. We get a fair number of “visiting” students from other colleges who take a few courses here en route to graduating at their “home” colleges. To count that as attrition for us would be asinine. We get a significant number of students who intend to do a year at the cc and then transfer; they’re usually the kids who had spotty records in high school, and who have been told by Mom and Dad that they have a year to put up or shut up. Success, in those cases, means leaving the cc after a year. Again, that shows up in the stats as attrition, even though the kid went on to get a degree elsewhere. (The close variation on this is the involuntary reverse transfer -- the kid who drank his way to a terrible GPA at a four-year college, and who arrives here seeking redemption. Those kids often don’t bother graduating here before heading back from whence they came.)
In the wrong hands, too much of a focus on graduation rates could also lead to pressure to move the academic standards downward, thereby defeating the purpose of college in the first place.
Some of the confounding variables are less obvious. Looking at comparative cc graduation rates by state, I’m struck that the states with the highest rates generally have the least viable four-year sectors. (Check out the charts at this link. Many of the states with the highest four-year college graduation rates have the lowest two-year college graduation rates, and vice versa.) That makes sense, if you think about it. In an area in which the cc is the only game in town, high achievers who don’t want to move away will attend the cc. In areas with high concentrations of four-year colleges, those same high achievers will skip the cc. The different student mix at the cc’s in the different states will result in different graduation rates, independent of anything the cc’s do or don’t do.
If the technology and privacy issues could be addressed, I’d like to see a measure that shows how successful cc grads are when they transfer to four-year schools. If the grads of Smith CC do well, and the grads of Jones CC do poorly, then you have a pretty good idea where to start. That would offset the penalty that otherwise accrues to cc’s in areas with vibrant four-year sectors, and it would provide an incentive to keep the grading standards high. If you get your graduation numbers up by passing anyone who can fog a mirror, presumably that will show up in their subsequent poor performance at destination schools. If your grads thrive, then you’re probably doing something right.
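To make the idea concrete, here’s a back-of-the-envelope sketch of what that comparison might look like once transfer records could actually be matched. The college names and GPA figures are invented purely for illustration; a real system would need matched student records across institutions.

```python
# Hypothetical sketch: compare post-transfer GPAs by sending community college.
# All data here is invented for illustration.
from statistics import mean

# (sending_cc, first_year_gpa_at_destination_school)
transfer_records = [
    ("Smith CC", 3.4), ("Smith CC", 3.1), ("Smith CC", 2.9),
    ("Jones CC", 2.1), ("Jones CC", 2.4), ("Jones CC", 1.9),
]

# Group GPAs by sending college
by_cc = {}
for cc, gpa in transfer_records:
    by_cc.setdefault(cc, []).append(gpa)

# Report the average post-transfer GPA for each sending college
for cc, gpas in sorted(by_cc.items()):
    print(f"{cc}: mean post-transfer GPA {mean(gpas):.2f} (n={len(gpas)})")
```

A gap like the one in this toy data (Smith CC grads averaging a full point higher at destination schools) would be a reasonable place to start asking questions, without rewarding grade inflation the way raw graduation rates do.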
Finally, of course, there’s an issue of preparation. The more economically depressed an area, generally speaking, the less academically prepared its entering students will be. If someone who’s barely literate doesn’t graduate, is that because the college didn’t do its job, or because it did? As with the K-12 world, it’s easy for “merit-based” measures to wind up ratifying plutocracy. That would run directly counter to the mission of community colleges, and to my mind, would be a tragic mistake. Any responsible use of performance measures would have to ‘control’ for the economics of the service area. If a college manages to outperform its demographics, it’s doing something right; if it underperforms its demographics, it’s dropping the ball.
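One simple way to operationalize “outperforming your demographics” is to predict each college’s graduation rate from a demographic variable and then score each college by its residual. The sketch below uses invented colleges, incomes, and rates, and a single predictor fit by ordinary least squares; it’s meant only to illustrate the mechanics, not to settle which variables a real accountability system should use.

```python
# Hypothetical sketch: judge colleges against their demographics, not raw rates.
# Colleges, incomes, and graduation rates below are invented for illustration.

colleges = {
    # name: (service_area_median_income_in_thousands, grad_rate_pct)
    "College A": (35, 22),
    "College B": (55, 30),
    "College C": (75, 36),
    "College D": (45, 31),
}

# Fit grad rate as a linear function of median income (simple least squares)
xs = [inc for inc, _ in colleges.values()]
ys = [rate for _, rate in colleges.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Residual = actual minus predicted; positive means outperforming demographics
residuals = {
    name: rate - (intercept + slope * inc)
    for name, (inc, rate) in colleges.items()
}

for name, resid in residuals.items():
    verdict = "outperforming" if resid > 0 else "underperforming"
    print(f"{name}: residual {resid:+.2f} points -> {verdict} its demographics")
```

In this toy data, College D posts a middling raw rate but beats what its service-area income predicts, while College C’s high raw rate is roughly what you’d expect given its wealthier area. That’s exactly the distinction raw graduation rates erase.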
I’m not naive enough to think that rankings won’t be used in some basically regressive and/or punitive way. But if we at least want to make informed choices, we should try to get the rankings right. Otherwise we’ll wind up rewarding all the wrong things.
Wise and worldly readers, what measures would you use to gauge the effectiveness of transfer-oriented programs at community colleges?
We've also started working on a reverse transfer program with our cc partners. Not much luck so far, but it goes a long way toward keeping them from thinking that we are "stealing" grads.
In your discussions of this issue of assessing performance, I always think about what it would be like if the accountable care organizations that are becoming part of healthcare were implemented in education. The argument against them in healthcare is that numerous separate businesses are now being held accountable for the outcomes of their patients and hospitals in particular are apprehensive about taking reimbursement hits for care decisions made by individual provider / pharmacy / lab groups. What if a collection of K-12, CCs and 4-years were held responsible for the education of the students in their geographic region of influence? What if there were "never events" that caused reimbursement to be withdrawn from schools and bonuses paid to those who succeeded in providing high quality ed? I think this assessment issue would be easier because the focus would broaden beyond the student's interaction with one school and it would be a more accurate picture of the outcomes that we care about - the education, to the extent needed, of our kids. This would play havoc with the system as it is but it might also force more integration and a sense of shared responsibility that's lacking in the current situation.
On your question:
The best measure of the success of an AA transfer program is student GPA during the first one or two semesters after transfer. However, this is impossible to capture with punch-card era measures like IPEDS.
Another problem is that they only count FTIC (first-time-in-college) students, which is stupid for a CC. Our biggest "improvement" is straightening out reverse transfer students by getting them adjusted to college-level work.
Within the limits of a system that only has institution-centric data, I would insist that the metric acknowledge that CC students are rarely "college ready". If 6 years is considered the norm for a student admitted to a university, then 3 years should be the norm only for a student who is effectively "admitted" to a CC. I would define that as needing no more than one semester of a pre-college math class. Period. It should be 4 years for those who need no more than one year (that is, 30 semester hours) of remediation. That is what Anonymous @2:57 is talking about.
I won't single out the subset of students who start below 5th grade math, but perhaps they should be identified in the data because allowing 3 years to do the minimum 2 semesters of college math translates into over 7 years to do 5 semesters of math.
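The arithmetic behind that last claim can be checked directly: it just scales the 3-years-per-2-semesters allowance up to the 5 semesters of math a student starting below 5th-grade level would need. (The numbers are the ones stated in the comment above.)

```python
# Checking the proportion in the comment above: pure arithmetic.
years_allowed = 3       # norm for a "college ready" CC student
semesters_covered = 2   # the minimum two semesters of college math
pace = years_allowed / semesters_covered   # years per semester of math

total_semesters = 5     # for a student starting below 5th-grade math
print(pace * total_semesters)              # 7.5 years, i.e. "over 7 years"
```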
That said, it may be hopelessly cumbersome, but I think that what makes the most sense is simply asking the students their goals when they start, and asking at the end if they feel well-served. If you are doing well on either one of those, count it as a success (though if lots of students met their goals and feel poorly served, you might want to do something, *cough*PhDprogramproblems*cough*).
If people are checking a goal box "demonstrate I can handle college level work without going into debt" it's pretty clear the CC can classify a lot of those individuals as successes, irrespective of degree attainment.
Given that the vast majority of CCs use the Accuplacer or the COMPASS tests, it ought to be very feasible to control for 'how good the students coming in are'. I think the data are there; people just aren't using them. (I'm sure privacy concerns complicate the issue, but really it shouldn't be any harder than tracking healthcare data.)
I see what CCPhysicist is saying, but given some of the other blog posts addressing remedial sequences, I'm inclined to disagree. Giving more time will just lead to more attrition. It's fair to judge a college by how efficiently they get folks through that remediation.
In the interest of full disclosure: having taken the COMPASS, I have trouble imagining not being able to pass out of most of the remedial courses. So it's possible I don't understand those students' needs at all.
(NB: let's see if I got my comment right this time...)
We certainly MEASURE how good students are when they come in, but we cannot CONTROL which of them come in. We are open admission (with a few exceptions that would blow your mind), which is why I argue that there is no way you can apply that criterion (3 years to do 2 years) to CC students as if they had passed some admission threshold.
Your judgment of the level of the placement exams is accurate, but you did not graduate below the median of a HS class from a marginal school in this state. Until I started meeting lots of these students at orientation, I had no idea what skills go with being in the bottom half of a HS graduating class, not to mention a dropout with a GED. Too many read English no better than ESL students, and many more cannot do anything that resembles HS algebra. Have you ever talked to a student whose SAT total is in the 600s? The few from this group who do make it will need more than three years to finish.
I like the idea of identifying a goal. That could help with a major confounding variable: students who think an AS in construction is really an AA to become an engineer (or vice versa) so they are misclassified in the FTIC data.
The period goes inside the quotation mark. CC students are often like adjuncts, in that both tend to be low-wage laborers.