### Friday, April 08, 2011

## Remedial Levels

A few weeks ago I promised a piece on remedial levels. It’s a huge topic, and my own expertise is badly limited. That said...

Community colleges catch a lot of flak for teaching so many sections of remedial (the preferred term now is “developmental”) math and English. (For present purposes, I’ll sidestep the politically loaded question of whether ESL should be considered developmental.) In a perfect world, every student who gets here would have been prepared well in high school, and would arrive ready to tackle college-level work.

This is not a perfect world. And given the realities of the K-12 system, especially in low-income areas, I will not hold my breath for that.

Many four-year colleges and universities simply define the issue away through selective admissions. Swarthmore doesn’t worry itself overly much about developmental math; if you need a lot of help, you just don’t get in. But community colleges are open-admissions by mission; we don’t have the option to outsource the problem. We’re where the problem gets outsourced.

I was surprised, when I entered the cc world, to discover that course levels and pass rates are positively correlated; the ‘higher’ the course content, the higher the pass rate. Basic arithmetic -- the lowest level developmental math we teach -- has a lower pass rate than calculus. The same holds in English, if to a lesser degree.

At the League for Innovation conference a few weeks ago, some folks from the Community College Research Center presented some pretty compelling research that suggested several things. First, it found zero predictive validity in the placement tests that sentence students to developmental classes. Students who simply disregarded the placement and went directly into college-level courses did just as well as students who did as they were told.

We’ve found something similar on my own campus. Last year, in an attempt to see if our “cut scores” were right, I asked the IR office and a math professor to see if there was a natural cliff in the placement test scores that would suggest the right levels for placing students into the various levels of developmental math. I had assumed that higher scores on the test would correlate with higher pass rates, and that the gently sloping line would turn vertical at some discrete point. We could put the cutoff at that point, and thereby maximize the effectiveness of our program.

It didn’t work. Not only was there no discrete dropoff; there was no correlation at all between test scores and course performance. None. Zero. The placement test offered precisely zero predictive power.

Second, the CCRC found that the single strongest predictor of student success that’s actually under the college’s control -- so I’m ignoring student gender and income, since we take all comers -- is length of sequence. The shorter the sequence, the better students do. The worst thing you can do, from a student success perspective, is to address perceived student deficits by adding more layers of remediation. If anything, you need to prune levels. Each new level provides a new ‘exit point’ -- the goal should be to minimize the exit points.

I’m excited about these findings, since they explain a few things and suggest an actual path for action.

Proprietary U did almost no remediation, despite recruiting a student body broadly comparable to a typical community college. At the time, I recall regarding that policy decision pretty cynically, especially since I had to teach some of those first semester students. Yet despite bringing in students who were palpably unprepared, it managed a graduation rate far higher than the nearby community colleges.

I’m beginning to think they were onto something.

This week I saw a webinar by Complete College America that made many of the same points, but that suggested a “co-requisite” strategy for developmental. In other words, it suggested having students take developmental English alongside English 101, and using the developmental class to address issues in 101 as they arise. It would require reconceiving the developmental classes as something closer to self-paced troubleshooting, but that may not be a bad thing. At least that way students will perceive a need for the material as they encounter it. It’s much easier to get student buy-in when the problem to solve is immediate. In a sense, it’s a variation on the ‘immersion’ approach to learning a language. You don’t learn a language by studying it in small chunks for a few hours a week. You learn a language by swimming in it. If the students need to learn math, let them swim in it; when they have what they need, let them get out of the pool.

I’ve had too many conversations with students who’ve told me earnestly that they don’t want to spend money and time on courses that “don’t count.” If they go in with a bad attitude, uninspired performance shouldn’t be surprising. Yes, extraordinary teacherly charisma can help, but I can’t scale that. Curricular change can scale.

This may seem pretty inside-baseball, but from the perspective of someone who’s tired of beating his head against the wall trying to improve student success rates without lowering standards, these findings offer real hope. It may be that the issue isn’t that we’re doing developmental wrong; the issue is that we’re doing it at all.

There’s real risk in moving away from an established pattern of doing things. As Galbraith noted fifty years ago, if you fail with the conventional approach, nobody holds it against you; if you fail with something novel, you’re considered an idiot. The “add-yet-another-level” model of developmental ed is well-established, with a legible logic of its own. But the failures of the existing model are just inexcusable. Assuming three levels of remediation with fifty percent pass rates at each -- which is pretty close to what we have -- only about 13 percent of the students who start at the lowest level will ever even reach the 101 level. An 87 percent dropout rate suggests that the argument for trying something different is pretty strong.
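The 13 percent figure is just compounded pass rates; a quick back-of-the-envelope check (assuming, as above, three levels at fifty percent each):

```python
# With a 50% pass rate at each of three developmental levels, only
# 0.5 ** 3 of the students who start at the bottom ever reach 101.
pass_rate = 0.5
levels = 3
survivors = pass_rate ** levels
print(f"{survivors:.1%} of starters reach the 101 level")   # 12.5%
print(f"{1 - survivors:.1%} are lost along the way")        # 87.5%
```

Every level pruned from the sequence roughly doubles the share of students who survive to the college-level course, which is the whole case for compression in miniature.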

Wise and worldly readers, have you had experience with compressing or eliminating developmental levels? If so, did it work?

Comments:

The math department at my CC is going to try something new in the fall. They are going to 'pair' two of the developmental math classes in a sequence, and if a student in the lower of the two manages to successfully complete all modules of MyMathLab within the first three weeks, they can move to the higher class of the pair (held during the same time block). This is in response to what you mention--that students sometimes wind up in the incorrect level math class based on the results of the placement exam.

It's not quite what you're doing, but my undergrad U did something 'remedial' for first-year engineering. The rule was as follows: if you failed 3 or more of the first-semester 'core credits', you were automatically moved into the "J Stream".

In this stream, you re-took the classes you'd failed between weeks 1 and 7 of Semester 2 while concurrently taking the second semester's normal load of 'stuff you didn't fail'. If you passed these re-takes, you then were moved into the Winter-Spring Semester, where you took the streamed successors to the failed classes between Week 8-12 of Semester 2, and then an extra 5-6 weeks between May 1 and June 15.

So essentially, it was remedial in the sense that weaker students who proved they needed it were given a chance to improve. It was different than your method in that students were only placed into the J Program under two conditions:

1) They had failed (hard) in their first semester at uni, and they knew they needed it.

2) They decided to enter the Program, despite having passed Semester 1, because they weren't happy with how poorly they had done, and realized they needed it.

The students in the J Program tend to be hard workers, succeed quite well (over 90% of them pass the Program and move back into regular-stream second year programming), and almost universally finish their degrees. They're even perversely proud of having done it.

This goes somewhat hand-in-hand with your idea of pairing programs, so students realize why they need to do it. The students in J know exactly why they're there, and they've already proved they need (or want) it. Their motivation is high, their performance is exactly what you'd want, and they tend to succeed.

My personal stance is that a large part of why we need all this supposed remediation in first-year tertiary education is because students never *have* failed before. Failure teaches things that squeaking through doesn't -- if you pass a student repeatedly, they'll never learn that just barely squeaking by isn't acceptable in the real world, and will likely get you fired. Unless you're in a union. Wups, did I say that? :)

If you let students in to classes without the basic skills, on a 'sink or swim' basis, are you expecting the faculty to provide extra help? Are you rating faculty on pass rates? Will you accept 'didn't have basic skills' as a reason for failure, or will you expect the prof to tutor the student?

I'm curious, because here the school has started open admissions, yet the lecturers are rated by their pass rates, so getting a load of developmental students means either less money (the old economic incentive thing), lowered standards, or a boat-load of (unpaid) extra work.

Stupid question, but at what level is math considered really at the college level? I was an engineer as an undergrad, and our normal sequences started at calculus. For us, even "college algebra" didn't count.

I'm just curious what, exactly, "remedial" means.

Our state system has mandated the elimination of the lower level of our developmental English and reading sequences beginning fall '12. Placement cutoffs will be adjusted, of course, and all students who place below 1101 will automatically go into the one developmental course. It'll be interesting. Most of us are in an uproar, predicting greater fail rates, but your post gives me a bit of hope. Curiously, math will retain its current two-course dev. sequence.

I teach developmental writing at a college, and I have found that I don't always agree with mandatory placement. (Although I can't say that high school GPAs are a good measure -- I have way too many "A" students who failed their first semester of college work.) Sometimes I go to the League for Innovation; I wish I had seen this presentation. I mean, how do we know that placement works if we don't have a control group?

For most colleges, developmental math is anything below College Algebra. That would tend to include a sequence of Arithmetic - Pre-Algebra - Elementary Algebra - Intermediate Algebra. As DD notes, it can be terribly daunting to students who place into Arithmetic to realize that they have to take 4 courses to get to the bottom of the college-level coursework.

We have the paired math class now, which we recommend to students as soon as they bog down in their current math class. It works pretty well, and we are planning on moving that free-form course into a lab setting where students can enter, be assessed, and take whatever they need to progress as quickly as possible. The instructor will then recommend the next class for each student.

I am hopeful that this lab arrangement will replace our standard progression outlined above.

Our Math program did much the same thing as your first commenter mentioned (to fair success, I believe).

Looking at other universities, though, we've found some schools who've made developmental composition a part of the core and moved the second-level comp. course to being taught by individual disciplines (and thereby keeping the time-honored two comp. core). I'm not sure how I feel about that yet.

At one of the Canadian universities I taught at they had 'foundations' courses in the Humanities that were 9 credits over a year as opposed to 6. Those extra 3 credits were supposed to be for addressing non-content issues like writing skills, reading comprehension, critical thinking skills etc. The courses consisted of 3 hours of lecture and two hours of tutorials (1 TA for 25 students) per week.

I taught one of these courses for several years, and much of that 'extra 3' component did turn out to be remedial in some sense; but because the work was based on actual course content and everyone took these courses, there was no stigma attached to a) taking the class or b) doing the 'extra' work. We were usually able to build quite individualized work plans with students because of the extra TAs and extended tutorial times, so the 'remedial' aspect helped every student at any level improve in ways that were measurable to both students and instructors. The courses had a low drop-out rate and almost every student came out a better writer and a more critical thinker than when they went in. I thought it worked brilliantly, and so did the students, based on the evaluations we received.

Ironically, my college is moving the opposite way, putting in place an absolute requirement that students in our first core class pass all remedial math and english BEFORE they enter the core. This typically adds a year on to their studies. But we had 50% fail rates with our students in that first year core and this has dropped that rate to 25%. Not great but better than it was.

Question:

How far up the math chain did your analysis go? Did you have a statistically significant number of students taking college algebra (logarithms and inverse functions) with placement scores for arithmetic, or did you only go up to intermediate algebra (quadratic equations)?

Second question:

Do you block kids who can't do fractions but CAN do basic algebra, or do you ignore the arithmetic score?

My own opinion is that the effect you see is because the higher-level classes are taught on the correct assumption that the students have forgotten everything they "learned" in the prerequisite course. At my college, the catalog descriptions of the various classes below college algebra are remarkably similar. They appear to assume nothing is retained from one week to the next.

I haven't taught at a CC, but have taught math at several high schools. During that time, I've taught General Math and Pre-Algebra at a school that felt students needed two years of remediation below Algebra I, and I've taught Algebra I at a school that decided to offer no math below that level.

I'm for dropping as many levels of remediation as possible, but I really like the idea of a co-requisite mostly self-paced remediation/study skills course for struggling students (I'd suggest getting below a B in the previous course in sequence, repeating the current course, or self-assigned might be good ways to select students for the course, but that becomes an IR question after you try a few things). I'd really suggest offering one for every class through whatever is the common exit point for non math majors at your school (business calculus?). This gives students who lack prerequisite skills a specific time and place to learn them in the context of the material they're supposed to be learning now, and if they can see that learning in the form of higher grades in the class they're trying to pass they'll value it more.

I didn't notice much difference in the immediate pass rates between when I taught pre-algebra and when taught algebra to similar populations, personally. It wouldn't surprise me if the same students who passed my pre-algebra course would have passed algebra I that year, and the same students who failed my algebra I course would have failed pre-algebra.

There're a few exceptions for students who genuinely haven't seen the material before (students who just came over as refugees from Somalia and haven't been in school for years come to mind) and would benefit from a compressed remediation sequence since their problems are with lack of exposure rather than comprehension or study habits. You probably get some students of this kind at a CC as well and should probably have some kind of remediation program in place for them that's different from throwing them in college math with a co-requisite. If you already have a solid GED program, that probably meets that need for you though. I just had a few students in Algebra I in that situation and I found it really frustrating because it was so clearly the wrong place for them as they really may never have been exposed to any formal math beyond basic arithmetic and were trying to cram years of math at once.

My undergrad Engineering program (which required completion of university stream math x2, physics, and chemistry) had a non-credit remedial program both terms of first year to which students were invited if:

1. They failed any section of the subject matter test given early in the first term,

2. They failed any midterm in 1A

3. They failed any course in 1A

4. They failed any midterm in 1B

These classes were basically small group tutorials to which you could bring any coursework you were having trouble with. It was optional to go, but highly encouraged. If you brought your marks up in the areas you'd been flagged for, you were "released" from the tutorial.

Admin did a great job of destigmatizing the program. That job was helped, I think, by the fact that so many students floated in and out of the program over first year. Not sure how it would work for more severe remedial needs, but it worked for us.

I'm intrigued by the research cited here indicating that placement testing has no predictive value. That's been my suspicion for some time, and it's good to know that somebody has the numbers to prove it.

Does anybody know if the Community College Research Center's findings are available online anywhere? Or if there are any similar (non-anecdotal) results available in print rather than as a conference presentation?

I read this through the lens of K-12 (I'm an elementary school teacher). As a result, I'm wondering if a piece of the problem is that we set expectations too low for students. If a student is not being challenged in a class, do they simply give up because they are bored? Do we then continue to simplify things under the assumption that they are struggling with being able to achieve? Should we, from a much younger age, be offering students more challenge rather than more remediation?

I'm curious what is meant by the assertion that placement tests have no predictive value. Literally none? Does that mean that students who score in the bottom 5% of these tests do as well as those who score in the top 5%, if they end up in the same class?

Some of you have written about successes with remedial programs in STEM fields, and that is exciting to read about. I think, though, that remediation may be different when dealing with students who are not in STEM fields. It is one thing for students who fully recognize (embrace, even) the need to be math-literate to go through a remediation program, and another for students who are totally and completely resistant to numeracy at even a basic level. Not only is the baseline math level probably much higher among STEM students (even the weakest of them), but the motivation is probably also much different.

There may still be some cross-over in terms of approaches and Just in Time Teaching of math skills, but I do think the needs and outlooks of the target populations are important factors, too.


@Anonymous 7:15PM,

As a former academic mathematician, let me assure you that there are plenty of STEM majors--especially mathematics education majors--that are "totally and completely resistant to numeracy at even a basic level".


All of the CCRC papers, with tons of insightful studies, are on their website at http://ccrc.tc.columbia.edu/. If you care about CC issues and haven't read their Assessment of Evidence series papers, add all of them to your reading list. Excellent work there.

As a teacher of developmental math, I find the basic problem is that students do not do the work and do not show up for class. And it is the 18-year-old kids.

They will tell me: "I don't like math, so I don't do my homework."

So maybe they need to be allowed to go into a college-level course if they choose and swim hard; if they sink, then they must take developmental. They just don't believe you when you tell them they are not ready. They need to fail to know that.



Re: "The placement test offered precisely zero predictive power."

Of course it doesn't. It is a placement test, not a predictive one. The placement test merely indicates achievement up to that moment; it is not supposed to predict.

As for the people who self-place: first, they have a motivation to move ahead, so they probably will do better in the course. Second, cut scores are misleading because of the standard error of measurement (SEM). What is needed is a band of scores that takes into account a margin of up to plus or minus 7 points. In other words, if someone scores an 84 on the Accuplacer and the SEM is 7, the true score actually lies somewhere in the range of 77 to 91.
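The band-versus-cut-score argument above can be sketched in a few lines of code. This is a minimal illustration, not anything the commenter or CCRC published: the SEM of 7 comes from the comment, while the cut score of 80 is an invented example.

```python
# Sketch: treating a placement score as a band (score +/- SEM) rather
# than a point, per the SEM argument above. The cut score here is a
# hypothetical illustration, not a published Accuplacer figure.

def score_band(observed: int, sem: int = 7) -> tuple[int, int]:
    """Return the (low, high) band implied by one SEM around a score."""
    return observed - sem, observed + sem

def placement_is_ambiguous(observed: int, cut_score: int, sem: int = 7) -> bool:
    """A placement is ambiguous when the cut score falls inside the band."""
    low, high = score_band(observed, sem)
    return low <= cut_score <= high

if __name__ == "__main__":
    low, high = score_band(84, sem=7)
    print(f"Observed 84 -> band {low}-{high}")       # band 77-91
    print(placement_is_ambiguous(84, cut_score=80))  # True: the cut falls inside the band
```

The point of the sketch: any cut score falling inside the band sorts statistically indistinguishable students into different courses.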

Finally, using only one placement instrument makes most placement invalid. There need to be at least two different measures, such as GPA and class rank, in addition to a placement test such as Accuplacer or COMPASS. It is much more complicated than the Community College Research Center makes it out to be.
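One way to read the "multiple measures" suggestion above is as a composite of test score and high-school GPA. The sketch below is purely illustrative: the scale, weights, and threshold are invented, not drawn from any actual placement policy.

```python
# Sketch of a "multiple measures" placement rule: combine a test score
# with high-school GPA instead of relying on the test alone. The 20-120
# scale, equal weights, and 0.6 threshold are invented for illustration.

def place_student(test_score: float, hs_gpa: float) -> str:
    """Return a placement from a composite of two normalized measures."""
    test_part = test_score / 120.0  # assume an Accuplacer-style 20-120 scale
    gpa_part = hs_gpa / 4.0         # standard 4.0 GPA scale
    composite = 0.5 * test_part + 0.5 * gpa_part
    return "college-level" if composite >= 0.6 else "developmental"

if __name__ == "__main__":
    # A middling test score plus a solid GPA clears the bar together,
    # where the test alone (84/120 = 0.7 of scale) might not.
    print(place_student(84, 3.2))  # composite 0.75 -> "college-level"
```

The design point is simply that a second measure can rescue a student whom a single noisy test score would misplace.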

If you want to learn more, read Ed Morante's "A Primer on Placement Testing" in ERIC. Also read Hunter Boylan's piece in Inside Higher Ed titled "Knee-jerk Reforms on Remediation" to see how statistics are being used to do a disservice to the field of Developmental Education and the students who need it.


I had to do remedial math when I started college because I refused to learn it in high school. But I was motivated, and I went from pre-algebra to pre-calculus in two years. I did not enjoy it. Later, I taught a remedial writing course; that was awesome, and I enjoyed it.

GED practice test preparatory packages are especially needed by those who have had problems with arithmetic during schooling. Passing the GED test is akin to being awarded a high school diploma. This means that in order to pass the GED math test, the review entails full coverage of basic number operations, fractions, decimals, percents, and proportions, for starters. From there, the rest is an uphill journey to geometry and calculus.
