Friday, September 09, 2011

 

The Right Measures

I’ve been following with interest the stories about the Federal government trying to decide which measures to use to judge the performance of various community colleges against each other.

Unfortunately, it appears that some of the measures chosen are far too simplified to give good information.

For vocational programs, placement rates and starting salaries may make some sense. Those are generally more reflective of the overall market than they are of the performance of any given program, but it’s easy to argue that a job prep program that doesn’t result in jobs doesn’t have much reason to exist. (Of course, the same argument could be applied to many Ph.D. programs...) For transfer programs, though, we’d need different measures.

Graduation rates are notoriously unhelpful, since they assume, falsely, that every student starts as a freshman and intends to complete a degree. We get a fair number of “visiting” students from other colleges who take a few courses here en route to graduating at their “home” colleges. To count that as attrition for us would be asinine. We get a significant number of students who intend to do a year at the cc and then transfer; they’re usually the kids who had spotty records in high school, and who have been told by Mom and Dad that they have a year to put up or shut up. Success, in those cases, means leaving the cc after a year. Again, that shows up in the stats as attrition, even though the kid went on to get a degree elsewhere. (The close variation on this is the involuntary reverse transfer -- the kid who drank his way to a terrible GPA at a four-year college, and who arrives here seeking redemption. Those kids often don’t bother graduating here before heading back from whence they came.)
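To make the measurement point concrete, here's a minimal sketch of what counting exits by category (rather than lumping everything into "attrition") might look like. The categories, field names, and records are invented; real data would require something like a National Student Clearinghouse match:

```python
# Minimal sketch: classify student exits instead of lumping them as attrition.
# Categories and records are hypothetical illustrations only.

def classify_exit(student):
    """Return an outcome label for a student who left this college."""
    if student["graduated_here"]:
        return "graduated"
    if student["visiting_from_other_college"]:
        return "visiting (degree at home college)"
    if student["enrolled_elsewhere_later"]:
        return "transferred out (success elsewhere)"
    return "true attrition"

students = [
    {"graduated_here": False, "visiting_from_other_college": True,  "enrolled_elsewhere_later": True},
    {"graduated_here": False, "visiting_from_other_college": False, "enrolled_elsewhere_later": True},
    {"graduated_here": False, "visiting_from_other_college": False, "enrolled_elsewhere_later": False},
]

for s in students:
    print(classify_exit(s))
```

Under a scheme like this, only the last case counts against the college; the first two are the visiting students and intentional transfers described above.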

In the wrong hands, too much of a focus on graduation rates could also lead to pressure to move the academic standards downward, thereby defeating the purpose of college in the first place.

Some of the confounding variables are less obvious. Looking at comparative cc graduation rates by state, I’m struck that the states with the highest rates generally have the least viable four-year sectors. (Check out the charts at this link. Many of the states with the highest four-year college graduation rates have the lowest two-year college graduation rates, and vice versa.) That makes sense, if you think about it. In an area in which the cc is the only game in town, high achievers who don’t want to move away will attend the cc. In areas with high concentrations of four-year colleges, those same high achievers will skip the cc. The different student mix at the cc’s in the different states will result in different graduation rates, independent of anything the cc’s do or don’t do.
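A toy calculation illustrates the confound. Suppose, purely for illustration, that well-prepared students graduate at 70 percent and everyone else at 30 percent, at every college; the only thing that differs is the mix:

```python
# Sketch: identical per-student completion odds, different student mixes.
# All numbers are invented to illustrate the confound.

high_achiever_grad_rate = 0.70   # completion probability, well-prepared students
other_grad_rate = 0.30           # completion probability, everyone else

def cc_grad_rate(share_high_achievers):
    return (share_high_achievers * high_achiever_grad_rate
            + (1 - share_high_achievers) * other_grad_rate)

# CC that's "the only game in town" keeps its high achievers;
# a CC surrounded by four-year colleges loses them.
print(cc_grad_rate(0.40))  # -> 0.46
print(cc_grad_rate(0.05))  # -> 0.32
```

Fourteen points of "performance" difference, with zero difference in what the colleges actually do.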

If the technology and privacy issues could be addressed, I’d like to see a measure that shows how successful cc grads are when they transfer to four-year schools. If the grads of Smith CC do well, and the grads of Jones CC do poorly, then you have a pretty good idea where to start. That would offset the penalty that otherwise accrues to cc’s in areas with vibrant four-year sectors, and it would provide an incentive to keep the grading standards high. If you get your graduation numbers up by passing anyone who can fog a mirror, presumably that will show up in their subsequent poor performance at destination schools. If your grads thrive, then you’re probably doing something right.
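As a rough sketch of how such a measure might be computed, assuming matched records from destination schools were available (the record layout and college names below are hypothetical):

```python
# Sketch: average post-transfer GPA grouped by sending CC.
# Record layout and names are hypothetical; a real version would need
# matched records from destination four-year schools.
from collections import defaultdict

transfer_records = [
    {"sending_cc": "Smith CC", "first_year_gpa": 3.4},
    {"sending_cc": "Smith CC", "first_year_gpa": 3.1},
    {"sending_cc": "Jones CC", "first_year_gpa": 2.0},
    {"sending_cc": "Jones CC", "first_year_gpa": 2.3},
]

gpas = defaultdict(list)
for rec in transfer_records:
    gpas[rec["sending_cc"]].append(rec["first_year_gpa"])

for cc, vals in gpas.items():
    print(cc, round(sum(vals) / len(vals), 2))
```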

Finally, of course, there’s an issue of preparation. The more economically depressed an area, generally speaking, the less academically prepared its entering students will be. If someone who’s barely literate doesn’t graduate, is that because the college didn’t do its job, or because it did? As with the K-12 world, it’s easy for “merit-based” measures to wind up ratifying plutocracy. That would run directly counter to the mission of community colleges, and to my mind, would be a tragic mistake. Any responsible use of performance measures would have to ‘control’ for the economics of the service area. If a college manages to outperform its demographics, it’s doing something right; if it underperforms its demographics, it’s dropping the ball.
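One way to operationalize "outperforming your demographics" would be to fit a simple line of graduation rate against a service-area economic indicator and rank colleges by their residuals. A minimal sketch, with entirely invented numbers:

```python
# Sketch: "beating your demographics" as the residual from a simple
# linear fit of graduation rate on service-area median income.
# All numbers are invented.
import numpy as np

median_income = np.array([32, 41, 55, 38, 60, 47])   # $ thousands
grad_rate     = np.array([22, 25, 34, 30, 33, 24])   # percent

slope, intercept = np.polyfit(median_income, grad_rate, 1)
expected = slope * median_income + intercept
residual = grad_rate - expected   # positive = outperforming demographics

for inc, res in zip(median_income, residual):
    print(f"income ${inc}k: {res:+.1f} points vs. expectation")
```

A real version would obviously need more covariates and more care, but the residual-based logic is the point.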

I’m not naive enough to think that rankings won’t be used in some basically regressive and/or punitive way. But if we at least want to make informed choices, we should try to get the rankings right. Otherwise we’ll wind up rewarding all the wrong things.

Wise and worldly readers, what measures would you use to gauge the effectiveness of transfer-oriented programs at community colleges?

Comments:
My CC is pretty much doomed if you aren't just looking at graduation rates, but at which students are successful within three years. Somewhere around 80% of our entering students need developmental work, and that work alone can often take them those three years.
 
I work at one of the state unis designed for transfer/military students. Our primary population is the transfer student. Our scholarships are designed to support the CC graduate; having the AA is a requirement for getting the scholarship. Our internal documentation shows that our CC graduates have a much higher persistence rate than non-grads.

We've also started a reverse-transfer program with our CC partners. Not much luck so far, but it goes a long way toward keeping them from thinking that we are "stealing" grads.
 
I think some attempt to look at improvement over time rather than base numbers might help the discussion. I also think that attrition numbers should be reported based on the number of units taken, which should filter out some of the students taking one or two classes for fun. I would look at failure rates in individual classes that are key to moving into more advanced curriculum (basic skills classes like English comp), define a set of functional goals for your two student communities (AA-and-out vs. transfer), and then find a way to test each year whether those goals were being met.
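By way of illustration, a minimal sketch of the gateway-course failure-rate piece of this suggestion (course names and records are made up):

```python
# Sketch: failure rates in "gateway" courses that are key to advancing.
# Course names and enrollment records are hypothetical.
from collections import Counter

enrollments = [
    ("ENG 101", "pass"), ("ENG 101", "fail"), ("ENG 101", "pass"),
    ("MAT 101", "fail"), ("MAT 101", "fail"), ("MAT 101", "pass"),
]

totals, fails = Counter(), Counter()
for course, outcome in enrollments:
    totals[course] += 1
    if outcome == "fail":
        fails[course] += 1

for course in totals:
    print(course, f"{fails[course] / totals[course]:.0%} failure rate")
```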

In your discussions of assessing performance, I always think about what it would be like if the accountable care organizations that are becoming part of healthcare were implemented in education. The argument against them in healthcare is that numerous separate businesses are now being held accountable for the outcomes of their patients, and hospitals in particular are apprehensive about taking reimbursement hits for care decisions made by individual provider/pharmacy/lab groups.

What if a collection of K-12 schools, CCs, and four-years were held responsible for the education of the students in their geographic region of influence? What if there were "never events" that caused reimbursement to be withdrawn from schools, and bonuses paid to those who succeeded in providing high-quality education? I think the assessment issue would be easier because the focus would broaden beyond the student's interaction with one school, and it would give a more accurate picture of the outcome we care about: the education, to the extent needed, of our kids. This would play havoc with the system as it is, but it might also force more integration and a sense of shared responsibility that's lacking in the current situation.
 
Confounding variables make a national comparison of what are fundamentally state or even local systems difficult. Two quick examples: one state with a high AA grad rate has very strong articulation and many state universities that do not accept transfers without the AA. (30-credit transfers are rare.) And Anonymous @5:33 sees a higher persistence rate for CC grads, who are the ones eligible for a special scholarship.

On your question:

The best measure of the success of an AA transfer program is student GPA during the first one or two semesters after transfer. However, this is impossible to capture with punch-card era measures like IPEDS.

Another problem is that they only count FTIC (first-time-in-college) students, which is stupid for a CC. Our biggest "improvement" is straightening out reverse transfer students by getting them adjusted to college-level work.

Within the limits of a system that only has institution-centric data, I would insist that the metric acknowledge that CC students are rarely "college ready". If 6 years is considered the norm for a student admitted to a university, then 3 years should only be the norm for a student who is effectively "admitted" to a CC. I would define that as needing no more than one semester of a pre-college math class. Period. It should be 4 years for those who need no more than one year (that is, 30 semester hours) of remediation. That is what Anonymous @2:57 is talking about.
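A small sketch of that rule as I read it; the function name and the handling of deeper remediation are my own assumptions:

```python
# Sketch of the proposed norm: the "on time" completion window scales with
# how much remediation a student needs on entry. Thresholds follow the
# comment above; everything else is an assumption for illustration.

def expected_years(semesters_of_remedial_math, semester_hours_of_remediation):
    """Years within which completion would count as 'on time' for a CC student."""
    if semesters_of_remedial_math <= 1:
        return 3     # effectively "admitted": college-ready on arrival
    if semester_hours_of_remediation <= 30:
        return 4     # up to one year (30 semester hours) of remediation
    return None      # deeper remediation: flag separately rather than force a norm

print(expected_years(1, 4))    # -> 3
print(expected_years(2, 30))   # -> 4
print(expected_years(3, 45))   # -> None
```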

I won't single out the subset of students who start below 5th grade math, but perhaps they should be identified in the data, because allowing 3 years to do the minimum 2 semesters of college math translates into over 7 years to do 5 semesters of math: 6 elapsed semesters for 2 courses is 3 elapsed semesters per course, so 5 courses at that pace is 15 semesters, or about 7.5 years.
 
Based on what Anon 5:33 said, I'd agree you should try to compare apples to apples (i.e., get some measure of how much the degree per se is incentivized).

That said, it may be hopelessly cumbersome, but I think what makes the most sense is simply asking the students their goals when they start, and asking at the end if they feel well served. If you are doing well on either one of those, count it as a success (though if lots of students met their goals but feel poorly served, you might want to do something, *cough*PhDprogramproblems*cough*).

If people are checking a goal box "demonstrate I can handle college level work without going into debt" it's pretty clear the CC can classify a lot of those individuals as successes, irrespective of degree attainment.

Given that the vast majority of CCs use the Accuplacer or COMPASS tests, it ought to be very feasible to control for how good the students coming in are. I think the data are there; people just aren't using them. (I'm sure privacy concerns complicate the issue, but it really shouldn't be harder than tracking healthcare data.)
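A minimal sketch of that kind of control: compare completion within placement-score bands, so each college is judged against students of similar incoming preparation. The bands, cut score, and records below are all invented:

```python
# Sketch: completion rates stratified by placement-score band, so colleges
# are compared on similarly prepared students. All data are hypothetical.
from collections import defaultdict

records = [
    {"college": "Smith CC", "placement": 45, "completed": True},
    {"college": "Smith CC", "placement": 82, "completed": True},
    {"college": "Jones CC", "placement": 44, "completed": False},
    {"college": "Jones CC", "placement": 80, "completed": True},
]

def band(score):
    return "low" if score < 60 else "high"   # invented cut score

tally = defaultdict(lambda: [0, 0])   # (completions, students) per (college, band)
for r in records:
    key = (r["college"], band(r["placement"]))
    tally[key][0] += r["completed"]
    tally[key][1] += 1

for (college, b), (done, n) in sorted(tally.items()):
    print(college, b, f"{done / n:.0%}")
```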

I see what CCPhysicist is saying, but given some of the other blog posts addressing remedial sequences, I'm inclined to disagree. Giving more time will just lead to more attrition. It's fair to judge a college by how efficiently it gets folks through that remediation.
In the interest of full disclosure: having taken the COMPASS, I have trouble imagining not being able to pass out of most of the remedial courses. So it's possible I don't understand those students' needs at all.

(NB: let's see if I got my comment right this time...)
 
This is Anon @5:33, adding information. We have articulations with all of the state CCs that guarantee acceptance of all AA credits (D or better) if students receive the AA; the credits are applied as a block. We transfer in up to 70 credits from CCs, and I routinely see students bringing in the limit of transfer credit. Our scholarship is great and we often run out of money, but the vast majority of our CC students complete their AA regardless of whether or not they get the scholarship.
 
Becca, one thing we (particularly our Dean) emphasize to new adjuncts is that students enrolling at a CC are "nothing like us".

We certainly MEASURE how good students are when they come in, but we cannot CONTROL which of them come in. We are open admission (with a few exceptions that would blow your mind), which is why I argue that there is no way you can apply that criterion (3 years to do 2 years) to CC students as if they had passed some admission threshold.

Your judgment of the level of the placement exams is accurate, but you did not graduate below the median of a HS class at a marginal school in this state. Until I started meeting lots of these students at orientation, I had no idea what skills go with being in the bottom half of a HS graduating class, not to mention a dropout with a GED. Too many read English no better than ESL students, and many more cannot do anything that resembles HS algebra. Have you ever talked to a student whose SAT total is in the 600s? The few from this group who do make it will need more than three years to finish.

I like the idea of identifying a goal. That could help with a major confounding variable: students who think an AS in construction is really an AA to become an engineer (or vice versa), so they are misclassified in the FTIC data.
 
Becca, one thing we (particularly our Dean) emphasize to new adjuncts is that students enrolling at a CC are "nothing like us".

The period goes inside the quotation mark. CC students are often like adjuncts, in that both tend to be low-wage laborers.
 