A few weeks ago I promised a piece on remedial levels. It’s a huge topic, and my own expertise is badly limited. That said...
Community colleges catch a lot of flak for teaching so many sections of remedial (the preferred term now is “developmental”) math and English. (For present purposes, I’ll sidestep the politically loaded question of whether ESL should be considered developmental.) In a perfect world, every student who gets here would have been prepared well in high school, and would arrive ready to tackle college-level work.
This is not a perfect world. And given the realities of the K-12 system, especially in low-income areas, I will not hold my breath for that.
Many four-year colleges and universities simply define the issue away through selective admissions. Swarthmore doesn’t worry itself overly much about developmental math; if you need a lot of help, you just don’t get in. But community colleges are open-admissions by mission; we don’t have the option to outsource the problem. We’re where the problem gets outsourced.
I was surprised, when I entered the cc world, to discover that course levels and pass rates are positively correlated; the ‘higher’ the course content, the higher the pass rate. Basic arithmetic -- the lowest-level developmental math we teach -- has a lower pass rate than calculus. The same holds in English, if to a lesser degree.
At the League for Innovation conference a few weeks ago, some folks from the Community College Research Center presented some pretty compelling research that suggested several things. First, it found zero predictive validity in the placement tests that sentence students to developmental classes. Students who simply disregarded the placement and went directly into college-level courses did just as well as students who did as they were told.

We’ve found something similar on my own campus. Last year, in an attempt to see whether our “cut scores” were right, I asked the IR office and a math professor to look for a natural cliff in the placement test scores that would suggest where to draw the lines between the various levels of developmental math. I had assumed that higher scores on the test would correlate with higher pass rates, and that the gently-slanting line would turn vertical at some discrete point. We could put the cutoff at that point, and thereby maximize the effectiveness of our program.
It didn’t work. Not only was there no discrete dropoff; there was no correlation at all between test scores and course performance. None. Zero. The placement test offered precisely zero predictive power.
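For what it’s worth, the check itself doesn’t require anything fancy. Here’s a minimal sketch of the kind of analysis described above, assuming a simple dataset with one row per student; the file name and column names are hypothetical, not our actual IR setup.

```python
# Sketch of the cut-score check: does the placement score predict passing,
# and is there a natural "cliff" anywhere in the score range?
# Assumes a CSV with hypothetical columns: "placement_score" (numeric)
# and "passed" (1 if the student passed the course, 0 otherwise).
import pandas as pd

df = pd.read_csv("placement_outcomes.csv")

# Overall correlation between score and passing. A value near zero is
# the "no predictive power" result described above.
print("correlation:", df["placement_score"].corr(df["passed"]))

# Bin the scores and compare pass rates across bins. A sharp jump from
# one bin to the next would suggest a defensible cut score; a flat line
# suggests the cutoff is arbitrary.
df["score_bin"] = pd.cut(df["placement_score"], bins=10)
print(df.groupby("score_bin", observed=True)["passed"].mean())
```

In our case, both numbers came back flat: no correlation, no cliff.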
Second, the CCRC found that the single strongest predictor of student success that’s actually under the college’s control -- so I’m setting aside student gender and income, since we take all comers -- is the length of the sequence. The shorter the sequence, the better students do. The worst thing you can do, from a student success perspective, is to address perceived student deficits by adding more layers of remediation. If anything, you need to prune levels. Each new level provides a new ‘exit point’ -- the goal should be to minimize the exit points.
I’m excited about these findings, since they explain a few things and suggest an actual path for action.
Proprietary U did almost no remediation, despite recruiting a student body broadly comparable to that of a typical community college. At the time, I recall regarding that policy decision pretty cynically, especially since I had to teach some of those first-semester students. Yet despite bringing in students who were palpably unprepared, it managed a graduation rate far higher than those of the nearby community colleges.
I’m beginning to think they were onto something.
This week I watched a webinar by Complete College America that made many of the same points, but that suggested a “co-requisite” strategy for developmental courses. In other words, it suggested having students take developmental English alongside English 101, and using the developmental class to address issues in 101 as they arise. It would require reconceiving the developmental classes as something closer to self-paced troubleshooting, but that may not be a bad thing. At least that way students would perceive a need for the material as they encounter it. It’s much easier to get student buy-in when the problem to solve is immediate. In a sense, it’s a variation on the ‘immersion’ approach to learning a language. You don’t learn a language by studying it in small chunks for a few hours a week. You learn a language by swimming in it. If the students need to learn math, let them swim in it; when they have what they need, let them get out of the pool.
I’ve had too many conversations with students who’ve told me earnestly that they don’t want to spend money and time on courses that “don’t count.” If they go in with a bad attitude, uninspired performance shouldn’t be surprising. Yes, extraordinary teacherly charisma can help, but I can’t scale that. Curricular change can scale.
This may seem pretty inside-baseball, but from the perspective of someone who’s tired of beating his head against the wall trying to improve student success rates without lowering standards, these findings offer real hope. It may be that the issue isn’t that we’re doing developmental wrong; the issue is that we’re doing it at all.
There’s real risk in moving away from an established pattern of doing things. As Galbraith noted fifty years ago, if you fail with the conventional approach, nobody holds it against you; if you fail with something novel, you’re considered an idiot. The “add-yet-another-level” model of developmental ed is well-established, with a legible logic of its own. But the failures of the existing model are just inexcusable. Assuming three levels of remediation with fifty percent pass rates at each -- which is pretty close to what we have -- only about 13 percent of the students who start at the lowest level will ever even reach the 101 level. An 87 percent dropout rate suggests that the argument for trying something different is pretty strong.
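The arithmetic behind that 13 percent is just compounding: each level cuts the surviving group roughly in half. A toy calculation, using the assumed figures from the paragraph above (three levels, fifty percent pass rate at each):

```python
# Back-of-the-envelope attrition through a developmental sequence,
# using the assumed figures above: three levels, 50 percent pass rate each.
levels = 3
pass_rate = 0.5

reach_101 = pass_rate ** levels   # 0.5 * 0.5 * 0.5 = 0.125
lost = 1 - reach_101              # 0.875

print(f"Reach the 101 level: {reach_101:.1%}")   # 12.5%
print(f"Lost along the way:  {lost:.1%}")        # 87.5%
```

By the same arithmetic, cutting the sequence from three levels to one, even at the same fifty percent pass rate, would move that 13 percent to 50 percent -- which is the force of the ‘fewer exit points’ finding.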
Wise and worldly readers, have you had experience with compressing or eliminating developmental levels? If so, did it work?