Friday, April 08, 2011
Community colleges catch a lot of flak for teaching so many sections of remedial (the preferred term now is “developmental”) math and English. (For present purposes, I’ll sidestep the politically loaded question of whether ESL should be considered developmental.) In a perfect world, every student who gets here would have been prepared well in high school, and would arrive ready to tackle college-level work.
This is not a perfect world. And given the realities of the K-12 system, especially in low-income areas, I will not hold my breath for that.
Many four-year colleges and universities simply avoid the issue by having selective admissions. Swarthmore doesn’t worry itself overly much about developmental math; if you need a lot of help, you just don’t get in. But community colleges are open-admissions by mission; we don’t have the option to outsource the problem. We’re where the problem gets outsourced.
I was surprised, when I entered the cc world, to discover that course levels and pass rates are positively correlated; the ‘higher’ the course content, the higher the pass rate. Basic arithmetic -- the lowest level developmental math we teach -- has a lower pass rate than calculus. The same holds in English, if to a lesser degree.
At the League for Innovation conference a few weeks ago, some folks from the Community College Research Center presented some pretty compelling research that suggested several things. First, it found zero predictive validity in the placement tests that sentence students to developmental classes. Students who simply disregarded the placement and went directly into college-level courses did just as well as students who did as they were told. We’ve found something similar on my own campus. Last year, in an attempt to see if our “cut scores” were right, I asked the IR office and a math professor to see if there was a natural cliff in the placement test scores that would suggest the right levels for placing students into the various levels of developmental math. I had assumed that higher scores on the test would correlate with higher pass rates, and that the gently-slanting line would turn vertical at some discrete point. We could put the cutoff at that point, and thereby maximize the effectiveness of our program.
It didn’t work. Not only was there no discrete dropoff; there was no correlation at all between test scores and course performance. None. Zero. The placement test offered precisely zero predictive power.
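For the curious, the kind of check our IR office ran can be sketched in a few lines of Python. The data and the helper function below are invented purely for illustration; the real analysis used actual placement scores and course outcomes.

```python
# Hypothetical sketch of the cut-score analysis: given each student's
# placement score and whether they passed the course (1/0), compute the
# (point-biserial) correlation between the two. A value near zero, as we
# found, means the test has essentially no predictive power.

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented example data: placement score, and whether the student passed.
scores = [42, 55, 61, 48, 70, 66, 53, 75, 58, 64]
passed = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

r = pearson_r(scores, passed)
print(f"correlation between placement score and passing: {r:.2f}")
```

The "cliff" I was hoping for would show up as scores above some threshold passing at a sharply higher rate; instead, plotting pass rate against score bands gave us a flat line.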
Second, the CCRC found that the single strongest predictor of student success that’s actually under the college’s control -- so I’m ignoring gender and income of student, since we take all comers -- is length of sequence. The shorter the sequence, the better they do. The worst thing you can do, from a student success perspective, is to address perceived student deficits by adding more layers of remediation. If anything, you need to prune levels. Each new level provides a new ‘exit point’ -- the goal should be to minimize the exit points.
I’m excited about these findings, since they explain a few things and suggest an actual path for action.
Proprietary U did almost no remediation, despite recruiting a student body broadly comparable to a typical community college. At the time, I recall regarding that policy decision pretty cynically, especially since I had to teach some of those first semester students. Yet despite bringing in students who were palpably unprepared, it managed a graduation rate far higher than the nearby community colleges.
I’m beginning to think they were onto something.
This week I saw a webinar by Complete College America that made many of the same points, but that suggested a “co-requisite” strategy for developmental. In other words, it suggested having students take developmental English alongside English 101, and using the developmental class to address issues in 101 as they arise. It would require reconceiving the developmental classes as something closer to self-paced troubleshooting, but that may not be a bad thing. At least that way students will perceive a need for the material as they encounter it. It’s much easier to get student buy-in when the problem to solve is immediate. In a sense, it’s a variation on the ‘immersion’ approach to learning a language. You don’t learn a language by studying it in small chunks for a few hours a week. You learn a language by swimming in it. If the students need to learn math, let them swim in it; when they have what they need, let them get out of the pool.
I’ve had too many conversations with students who’ve told me earnestly that they don’t want to spend money and time on courses that “don’t count.” If they go in with a bad attitude, uninspired performance shouldn’t be surprising. Yes, extraordinary teacherly charisma can help, but I can’t scale that. Curricular change can scale.
This may seem pretty inside-baseball, but from the perspective of someone who’s tired of beating his head against the wall trying to improve student success rates without lowering standards, these findings offer real hope. It may be that the issue isn’t that we’re doing developmental wrong; the issue is that we’re doing it at all.
There’s real risk in moving away from an established pattern of doing things. As Galbraith noted fifty years ago, if you fail with the conventional approach, nobody holds it against you; if you fail with something novel, you’re considered an idiot. The “add-yet-another-level” model of developmental ed is well-established, with a legible logic of its own. But the failures of the existing model are just inexcusable. Assuming three levels of remediation with fifty percent pass rates at each -- which is pretty close to what we have -- only about 13 percent of the students who start at the lowest level will ever even reach the 101 level. An 87 percent dropout rate suggests that the argument for trying something different is pretty strong.
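The arithmetic behind that 13 percent is simple enough to sketch, assuming (purely for illustration) an independent 50 percent pass rate at each level:

```python
# Sequence attrition: with an assumed 50% pass rate at each of three
# developmental levels, the share of students who ever reach the 101
# level is 0.5 ** 3 = 12.5%, i.e. roughly 13 percent.
pass_rate = 0.50
levels = 3
reach_101 = pass_rate ** levels
print(f"reach 101 with three levels: {reach_101:.1%}")  # 12.5%

# Pruning a single level nearly doubles throughput:
print(f"reach 101 with two levels: {pass_rate ** 2:.1%}")  # 25.0%
```

Every level you cut multiplies the survivors, which is the "minimize the exit points" argument in miniature.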
Wise and worldly readers, have you had experience with compressing or eliminating developmental levels? If so, did it work?
In this stream, you re-took the classes you'd failed between weeks 1 and 7 of Semester 2 while concurrently taking the second semester's normal load of 'stuff you didn't fail'. If you passed these re-takes, you then moved into the Winter-Spring Semester, where you took the streamed successors to the failed classes between weeks 8 and 12 of Semester 2, plus an extra 5-6 weeks between May 1 and June 15.
So essentially, it was remedial in the sense that weaker students who proved they needed it were given a chance to improve. It was different from your method in that students were only placed into the J Program under two conditions:
1) They had failed (hard) in their first semester at uni, and they knew they needed it.
2) They decided to enter the Program, despite having passed Semester 1, because they weren't happy with how poorly they had done, and realized they needed it.
The students in the J Program tend to be hard workers, succeed quite well (over 90% of them pass the Program and move back into regular-stream second year programming), and almost universally finish their degrees. They're even perversely proud of having done it.
This goes somewhat hand-in-hand with your idea of pairing programs, so students realize why they need to do it. The students in J know exactly why they're there, and they've already proved they need (or want) it. Their motivation is high, their performance is exactly what you'd want, and they tend to succeed.
My personal stance is that a large part of why we need all this supposed remediation in first-year tertiary education is because students never *have* failed before. Failure teaches things that squeaking through doesn't -- if you pass a student repeatedly, they'll never learn that just barely squeaking by isn't acceptable in the real world, and will likely get you fired. Unless you're in a union. Wups, did I say that? :)
I'm curious, because here the school has started open admissions, yet the lecturers are rated by their pass rates, so getting a load of developmental students means either less money (the old economic incentive thing), lowered standards, or a boat-load of (unpaid) extra work.
I'm just curious what, exactly, "remedial" means.
We have the paired math class now, which we recommend to students as soon as they bog down in their current math class. It works pretty well, and we are planning on moving that free-form course into a lab setting where students can enter, be assessed, and take whatever they need to progress as quickly as possible. The instructor will then recommend the next class for each student.
I am hopeful that this lab arrangement will replace our standard progression outlined above.
Looking at other universities, though, we've found some schools that have made developmental composition a part of the core and moved the second-level comp course to being taught by the individual disciplines (thereby keeping the time-honored two-course comp core). I'm not sure how I feel about that yet.
I taught one of these courses for several years, and much of that 'extra 3' component did turn out to be remedial in some sense; but because this work was based on actual coursework and everyone took these courses, there was no stigma attached to a) taking the class or b) doing the 'extra' work. We were usually able to draw up quite individual work-plans with students because of the extra TAs and extended tutorial times, so the 'remedial' aspect helped every student, at any level, improve in ways that were measurable to both students and instructors. The courses had a low drop-out rate, and almost every student came out a better writer and a more critical thinker than when they went in. I thought it worked brilliantly, and so did the students, based on the evaluations we received.
How far up the math chain did your analysis go? Did you have a statistically significant number of students taking college algebra (logarithms and inverse functions) with placement scores for arithmetic, or did you only go up to intermediate algebra (quadratic equations)?
Do you block kids who can't do fractions but CAN do basic algebra, or do you ignore the arithmetic score?
My own opinion is that the effect you see is because the higher-level classes are taught based on the correct assumption that the students have forgotten everything they "learned" in the prerequisite course. At my college, the catalog descriptions of the various classes below college algebra are remarkably similar. They appear to assume nothing is retained from one week to the next.
I'm for dropping as many levels of remediation as possible, but I really like the idea of a co-requisite mostly self-paced remediation/study skills course for struggling students (I'd suggest getting below a B in the previous course in sequence, repeating the current course, or self-assigned might be good ways to select students for the course, but that becomes an IR question after you try a few things). I'd really suggest offering one for every class through whatever is the common exit point for non math majors at your school (business calculus?). This gives students who lack prerequisite skills a specific time and place to learn them in the context of the material they're supposed to be learning now, and if they can see that learning in the form of higher grades in the class they're trying to pass they'll value it more.
I didn't notice much difference in the immediate pass rates between when I taught pre-algebra and when I taught algebra to similar populations, personally. It wouldn't surprise me if the same students who passed my pre-algebra course would have passed algebra I that year, and the same students who failed my algebra I course would have failed pre-algebra.
There are a few exceptions for students who genuinely haven't seen the material before (students who just came over as refugees from Somalia and haven't been in school for years come to mind) and would benefit from a compressed remediation sequence, since their problems stem from lack of exposure rather than comprehension or study habits. You probably get some students of this kind at a CC as well, and should probably have some kind of remediation program in place for them that's different from throwing them into college math with a co-requisite. If you already have a solid GED program, that probably meets the need, though. I just had a few students in Algebra I in that situation, and I found it really frustrating because it was so clearly the wrong place for them: they may genuinely never have been exposed to any formal math beyond basic arithmetic, and were trying to cram years of math into a single course.
1. They failed any section of the subject matter test given early in the first term,
2. They failed any midterm in 1A
3. They failed any course in 1A
4. They failed any midterm in 1B
These classes were basically small group tutorials to which you could bring any coursework you were having trouble with. It was optional to go, but highly encouraged. If you brought your marks up in the areas you'd been flagged for, you were "released" from the tutorial.
Admin did a great job of destigmatizing the program. That job was helped, I think, by the fact that so many students floated in and out of the program over first year. Not sure how it would work for more severe remedial needs, but it worked for us.
Does anybody know if the Community College Research Center's findings are available online anywhere? Or if there are any similar (non-anecdotal) results available in print rather than as a conference presentation?
There may still be some cross-over in terms of approaches and Just in Time Teaching of math skills, but I do think the needs and outlooks of the target populations are important factors, too.
As a former academic mathematician, let me assure you that there are plenty of STEM majors--especially mathematics education majors--that are "totally and completely resistant to numeracy at even a basic level".
They will tell me, "I don't like math, so I don't do my homework."
So maybe they need to be allowed to go into a college-level course if they choose and swim hard -- or, if they sink, then they must take developmental. They just don't believe you when you tell them they are not ready. They need to fail to know that.
Of course it doesn't. It is a placement test, not a predictive one. The placement test merely indicates achievement up to that moment. It is not supposed to predict!
As for the people who self-place: first, they have a motivation to move ahead, so they will probably do better in the course. Second, cut scores are misleading because of the standard error of measurement (SEM). What is needed is a band of scores that takes into account a plus or minus of up to 7 points. In other words, if someone scores an 84 on the Accuplacer, the true score actually lies somewhere in the range of 77 to 91.
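To make the commenter's SEM point concrete, here is a small sketch; the cut score of 80 is invented for illustration, and the 7-point band is the figure mentioned above:

```python
# Sketch of the SEM argument: a single cut score ignores measurement error.
# With an SEM band of +/- 7 points, an observed Accuplacer score of 84 is
# consistent with a true score anywhere from 77 to 91.
sem = 7
observed = 84
band = (observed - sem, observed + sem)
print(band)  # (77, 91)

cut_score = 80  # hypothetical cut score, for illustration only
# A naive single-score rule places this student above the line...
print(observed >= cut_score)  # True
# ...but the error band straddles the cut, so the placement is ambiguous.
print(band[0] < cut_score <= band[1])  # True
```

Any student whose band straddles the cut is effectively a coin flip, which is one reason a single instrument makes for shaky placement.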
Finally, using only one placement instrument makes most placement invalid. There need to be at least two different measures, such as high school GPA and class rank, in addition to a placement test such as Accuplacer or COMPASS. It is much more complicated than the Community College Research Center makes it out to be.
If you want to learn more, read Ed Morante's A Primer on Placement Testing in ERIC. Also read Hunter Boylan's piece in Inside Higher Ed titled "Knee-jerk Reforms on Remediation" to see how statistics are being used as a disservice to the field of Developmental Education and the students who need it.