I owe John Hetts a thank you for a well-timed reminder this week on Twitter.
Taking a recent report on the spread of multi-factor placement as a jumping-off point, Hetts pointed out that multi-factor placement is now used at a majority of community colleges across the country, and that a statistic like that would have been unthinkable five years ago.
It’s a little bit like watching your kids grow. On a day-to-day level, you barely notice it. But a photo at 10 and a photo at 15 look unmistakably different. You just have to step back a bit to see it.
Which is fitting, really. Data and statistics across time and large sample sizes are a way of stepping back from what’s right in front of you, and noticing patterns that might not be obvious if you stand too close. In this case, anecdote accomplishes something similar for folks who are often buried in data.
In my first community college administrative position, which I held from 2003 to 2008, I don’t recall ever hearing a single discussion about graduation rates, retention rates, or achievement gaps. If you had asked me what the college’s graduation rate was, I couldn’t have told you. And that wasn’t just me; those topics simply weren’t on the institutional radar. We talked about money, and articulation agreements, and outcomes assessment, and there was no shortage of personalities and all that came with them, but we really didn’t look at institution-level student outcomes. Outcomes assessment was confined to the classroom.
That seems bizarre now, but at the time, it was simply the way it was. I remember attending a conference and hearing somebody ask whether we, in the audience, knew the longitudinal success rates of students who started out in remedial classes. Not only didn’t I know, I had never even been asked the question before. I remember being embarrassed at not knowing, but when I got back to campus, it just wasn’t a topic on the table.
It’s a different world now. I’ve devoted entire faculty meetings to discussions of achievement gaps, and have made closing those gaps the central focus of the academic master plan. And I’m not alone in that. At this point, anyone in an academic administrative role at a community college should be well-versed in the basics of their local data.
Of course, knowing and doing remain two separate enterprises. Multi-factor placement provides an easy example. The idea behind it is that inaccurate remedial placement can do more harm than good, so keeping students who don’t really need remediation from being placed into it is the right thing to do. Placing students into remedial coursework based on a single test score, such as one from the Accuplacer, leads to a lot of wasted time and money, and some unnecessary attrition. A given student can get a misleading score on a given day for a host of reasons. Better to look at other factors, such as high school coursework and GPA, which reflect performance sustained over time. And the folks who study these things -- including Mr. Hetts -- have found that previous performance in school is a better predictor of future performance in school than a single standardized test is. That’s probably why selective colleges put more stock in high school performance than in the SAT.
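To make the idea concrete, here’s a minimal sketch of what a multi-factor placement rule can look like. It’s purely illustrative: the function, the thresholds, and the corequisite option are assumptions made up for the example, not Hetts’s research, the report’s findings, or any college’s actual policy.

```python
# A purely illustrative multi-factor placement rule -- not any college's actual
# policy. The thresholds and factors below are invented for the sketch.

def place_student(hs_gpa: float, test_score: float, passed_hs_algebra2: bool) -> str:
    """Recommend a math placement using several signals instead of one test score."""
    # Sustained high school performance can clear a student on its own.
    if hs_gpa >= 3.0 or (hs_gpa >= 2.6 and passed_hs_algebra2):
        return "college-level math"
    # A strong test score can still clear a student whose GPA is borderline.
    if test_score >= 265:
        return "college-level math"
    # Middle cases get college-level work with extra support rather than a detour.
    if hs_gpa >= 2.3 or test_score >= 250:
        return "college-level math with corequisite support"
    # Only when every signal is weak does the rule point to a developmental course.
    return "developmental math"


if __name__ == "__main__":
    # A student with a solid GPA but an off day on the test is no longer
    # routed into remediation on the strength of one score.
    print(place_student(hs_gpa=3.2, test_score=230, passed_hs_algebra2=True))
```

The point of the sketch is the shape of the rule, not the numbers: several indicators of sustained performance each get a chance to place a student into college-level work before a single test score can send them to remediation.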
The first time an idea like that is introduced on campus, though, the initial response is typically some version of “but that’s watering down standards!” The objection rests on the fallacy of false precision, but it carries emotional weight, and it can take a while for people to get past it. That has been the work of the last several years. It’s halting and uneven, and it occasionally falls prey to local political disputes driven by other agendas. But it’s happening. The gradual accretion of evidence that comes from asking the right questions is becoming harder to ignore. Now, apparently, a majority of community colleges have made the leap. That’s wonderful news.
Reforms based on data can take frustratingly long to gain traction. That’s especially true when the harm done by the incumbent system falls disproportionately on students of color, for whom the larger culture offers no shortage of alternate explanations. But actual traction is starting to happen. We’re slowly and unevenly getting to the point where these topics aren’t automatic conversation-stoppers. That’s nowhere near enough, but it’s better than I’ve ever seen.
So thank you, John Hetts, for a well-timed reminder that we’re actually making headway. We are, we should, and frankly, we have to.