Sunday, April 22, 2012

 

Placing Thousands of Students Quickly

How do you know if a student needs remediation?

It isn’t as straightforward as it sounds.  

Most community colleges require entering students to take standardized placement tests in math and English.  If a student scores below a certain cutoff, she is shunted into developmental courses.  Depending on how far below the cutoff she scores, she may be shunted into two or three semesters' worth of developmental coursework, nearly doubling the anticipated time to graduation.  (And graduation rates of students who start out at the lowest levels of developmental coursework are far lower than those of students who don't.)

In many cases, the students are referred directly to testing upon initial admission, with no opportunity to review for the test.  My college, like many, uses a test mandated by the state, so we don’t have the option of changing it or disregarding it.  

It’s frustrating.  Last year, when we started looking at restructuring (shortening) the developmental math sequence, one math professor here looked at student performance in existing developmental classes and compared it with placement test results.  He found no correlation.  In other words, the test scores offered absolutely no predictive value.

Yet we’re still required to use them.

It’s easy to condemn placement tests.  They carry all of the flaws of any high-stakes standardized test, and they don’t even help in the aggregate.  

But condemning the tests doesn’t solve the underlying problem.  When you have thousands of new students showing up in a compressed timeframe, ranging in age from fresh out of high school to retirement, and you need to place them all quickly, what do you do?

Small, selective places have the option of doing granular reviews of high school grades, and/or of simply turning away students who aren't prepared to jump right into college-level math.  That's fine for them, but it doesn't work in a larger, open-admissions setting.  We don't have the staffing to do that, and even if we did, it's not clear that it would make sense for older students.  (I last took advanced math in the 1980s.  Drop an exam from that class in front of me now, cold, and I wouldn't have a clue what to do.)

Alternatively, we could allow students to select their own classes.  (There are times when I lean this way myself.)  The danger is that students will badly overestimate their own abilities and quickly wash out of college-level classes.  In the meantime, they will have taken seats that could have gone to students who might have succeeded.  The libertarian ideal of “let them fail” falsely assumes that the cost of failure accrues only to the student; in fact, the student who washes out has deprived another student of a seat.  Given a scarcity of seats, we have a responsibility to allocate them as wisely as we can.  (One could also argue that “let them fail” represents a waste of financial aid, which is largely tax-funded.)

There’s also the annoying political reality that “let them fail” would lead, in the short term, to even higher attrition rates.  In an era in which attrition is assumed to be the college’s fault, that would amount to institutional suicide.

In the short term, the easiest and most prudent approach is probably the small-bore solution of finding a test that actually tells you something, preferably with students getting an opportunity to review ahead of time.  The more radical solutions of embedded remediation or simply letting students fail would either take years to develop, in the former case, or require a political sea change, in the latter.

Is there a better way?  Wise and worldly readers, have you seen (or come up with) a reasonably fast and efficient way to place thousands of students at the right level in a short time?

Comments:
The Carnegie Foundation for the Advancement of Teaching has two accelerated math remediation approaches it is developing and testing with community colleges in a "networked improvement community". Take a look at the website: http://www.carnegiefoundation.org/quantway
 
Fast and efficient is what you have now! You mean fast and effective, to which my answer is "no". However, it is far from clear why the answer is no, and this comes from a college that has looked at this really hard.

I'm glad to see you have done some internal data analysis, which was to be my suggestion, so I have to ask whether your state based its "cut" scores on research like that or just made them up. A similar question applies to how students do in the next class when they (A) placed directly into it versus (B) got there by passing the developmental class; a rough sketch of that comparison is below. That would be where HS transcripts might come into play, if you have them electronically so they can be part of your data analysis. It would be great if you could believe the grades, based on a standard "Regents"-lite type of final exam.

I say this because the only plural of anecdote I can offer is that the actual name of a class appears to mean nothing in my state. I've advised students who were currently taking (and reportedly passing) a class called "pre-calc" in HS (meaning they took the college placement test about the same time they took the final exam in June) and who failed the algebra placement. When I write out a simple 3x + b = c problem and ask them to solve for x, they can't do it.

That means to me that our test was correct. Is it possible these students completely forgot everything in just a week? Yes, I think it could be, and that kind of rapid forgetting could be part of why placement exams look ineffective as predictors. There is, of course, another explanation, one that might correlate with NCLB results for that school. One way to distinguish the two causes is to give the placement test much earlier, during the school year. You might try that if you draw a lot of your students from a few high schools.
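Here is the (A)-vs-(B) comparison mentioned above, in back-of-the-envelope form. A quick script along these lines would do it; the file name and column names are invented for illustration:

```python
# Compare pass rates in the college-level course for students who
# (A) placed directly into it versus (B) arrived by first passing the
# developmental course. File and column names are hypothetical.
import csv
from collections import Counter

passed, total = Counter(), Counter()
with open("college_course_roster.csv", newline="") as f:
    # expected columns: route ("A" or "B"), passed ("yes" or "no")
    for row in csv.DictReader(f):
        total[row["route"]] += 1
        if row["passed"] == "yes":
            passed[row["route"]] += 1

for route in ("A", "B"):
    if total[route]:
        rate = passed[route] / total[route]
        print(f"route {route}: {passed[route]}/{total[route]} = {rate:.0%} passed")
```

If the two rates are close, the developmental course is doing its job; if (B) lags badly, the sequence is filtering rather than preparing.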

PS -
I'm aware of the Quantway approach, which blends a semester of developmental work with a semester of college-level math into a two-semester sequence for students not going on to trig or calculus. Its results are too preliminary to evaluate, but this is an area where 50% would look good!
 
We found that ACT/SAT together with high school GPA was a decent predictor for first-time freshmen.

I'd like to see each question on the traditional, computer-scored, multiple-choice placement exam paired with a second question that asks:

A math class that taught me how to solve problems like this would be:
A. Much too easy.
B. A little bit too easy.
C. The right level for me.
D. A little bit too hard.
E. Much too hard.

That way you could compare what the students know with what they think they know.
 
Gender matters too...

There's some research out of the University of Oregon showing that placement tests over-predict males' performance and under-predict females'.

Could you write a nice linear equation where each student's math grades, SAT score, placement test score, and gender get dropped in and, poof, out comes a 'placement score'? Or is even that too labor-intensive?

I could imagine the admissions people helping by adding a couple of lines to a file, the placement people adding in the placement test score, and the equation being built right into the database.
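A minimal sketch of what that equation might look like, with made-up weights and hypothetical inputs (a real version would fit the coefficients to the college's own outcome data, e.g. with a logistic regression on "passed the college-level course"):

```python
# Hypothetical placement-score formula: a weighted sum of the predictors
# mentioned above. Every weight, scale, and cutoff here is a placeholder;
# fit them to local outcome data before trusting any of it.

def placement_score(hs_math_gpa, sat_math, placement_test, is_female):
    """Combine several predictors into one score (all names hypothetical)."""
    gpa_term = hs_math_gpa / 4.0          # 0.0-4.0 scale
    sat_term = (sat_math - 200) / 600.0   # 200-800 scale
    test_term = placement_test / 100.0    # assume a 0-100 placement test
    # The Oregon finding suggests a gender adjustment; the sign and size
    # here are purely illustrative.
    gender_term = 0.05 if is_female else 0.0
    return 0.35 * gpa_term + 0.25 * sat_term + 0.35 * test_term + gender_term

# Example: map the score onto three placement levels with invented cutoffs.
score = placement_score(hs_math_gpa=2.8, sat_math=510, placement_test=62, is_female=True)
level = ("college-level" if score >= 0.70
         else "one semester dev" if score >= 0.50
         else "two semesters dev")
print(f"score={score:.2f} -> {level}")
```

The database side is the easy part; estimating the weights is a one-time job for an institutional research office.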
 
So the state-mandated test is useless. Can you make it "advisory?" Or are you mandated to abide strictly by its results?

If you can make it advisory, come up with your own internal metric that does a better job (hey, it sounds like ANYTHING would be better) and use that for actual placement. If that knots some panties at the state capitol, then use the internal metric only when people score below some threshold on the state-mandated test. (Though if the test shows no correlation to outcomes, even that threshold would be arbitrary.)
 
I think a chance to review ahead of time would be good for incoming students. I have had students placed in my developmental English class who didn't need to be there; they weren't rested before the test, didn't know they would be tested, or had other problems on test day.

A chance to review would be especially good for students who have been out of high school a few years.

If a student gets an opportunity to review a day or more before the test and doesn't, but then goes on to score at a developmental level, I consider that a valid placement. I think such a student won't devote the time and effort needed to do well in a Freshman level class, so the placement in developmental is valid.
 
The last comment reminded me that we allow our students to re-take the placement exam after a short period of time. That gives them a chance to review the material they now know to expect.

BTW, the real problem on the math side is that they can't use a calculator on the placement test. This is strange, because they CAN use a calculator on all of the K-12 tests in my state and in all of our college math classes until they get to certain calculus classes.

In closing:
Dean Dad, could you clarify what your prof analyzed? How do you evaluate the placement test if all students who fail it are forced into developmental classes? Was it a comparison of fail rates between students who passed the test and students who came through a developmental class first?
 
CCPhysicist-

I suppose one could look at pass rates in whatever class students place into. In a roomful of students taking the same course (whether a remedial course or a more standard college-level course), some will have higher placement test scores than others. (Unless you have so many placement levels that the scores in every class fall in a very narrow band.) See if those scores predict pass rates, grades, or whatever within the class students are placed into. If the test is useless for distinguishing the A students from the F students in, say, pre-calc, and also useless for distinguishing the A students from the F students in Calc I, it is plausible that it is useless for deciding who needs pre-calc before calc.
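Concretely, a rough sketch of that within-class check might look like this (the file and column names are hypothetical; statistics.correlation needs Python 3.10+):

```python
# For each course, correlate placement-test scores with final grades among
# the students who ended up in that course. A Pearson r near zero means
# the test has no predictive value within that course.
import csv
from collections import defaultdict
from statistics import correlation

by_course = defaultdict(lambda: ([], []))
with open("placements.csv", newline="") as f:
    # expected columns: course, placement_score, final_grade (numeric, e.g. 0-4)
    for row in csv.DictReader(f):
        scores, grades = by_course[row["course"]]
        scores.append(float(row["placement_score"]))
        grades.append(float(row["final_grade"]))

for course, (scores, grades) in sorted(by_course.items()):
    # skip tiny or degenerate samples, which correlation() can't handle
    if len(scores) > 2 and len(set(scores)) > 1 and len(set(grades)) > 1:
        print(f"{course}: n={len(scores)}, r={correlation(scores, grades):+.2f}")
```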
 
Good suggestion, Alex. That is basically what we do with our institutional research tools. However, the test we are talking about is not the one used to decide if you can take calculus without taking pre-calc. (That test seems to do a pretty good job, although all one can really detect are false positives.)

My problem with the entire process is that we all know the cutoff is not a sharp one for predicting Calc I success based on pre-calc grades, so I would imagine it cannot be sharp for a placement test either. That is why I like the idea of a formula with some other factor (like HS grades) for students on or near the borderline.
 
The two skills most predictive of success in elementary (beginning, first semester) algebra are:

a) true facility with fractions, particularly addition and subtraction, and

b) true facility with negative numbers, particularly addition and subtraction.

A quick test that focused exclusively on these two skill areas would almost certainly give you far more predictive power regarding which students really belong in remedial classes and which are likely to succeed in introductory algebra (a throwaway sketch of such items is below).

Not that any test could be worse than what you're using now, evidently.
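For illustration only, a throwaway generator for exactly those two item types (everything here is hypothetical):

```python
# Generate the two item types named above: fraction addition/subtraction
# and signed-number addition/subtraction.
import random
from fractions import Fraction

def fraction_item():
    a, b = random.randint(1, 9), random.randint(2, 9)
    c, d = random.randint(1, 9), random.randint(2, 9)
    op = random.choice(["+", "-"])
    ans = Fraction(a, b) + Fraction(c, d) if op == "+" else Fraction(a, b) - Fraction(c, d)
    return f"{a}/{b} {op} {c}/{d} = ?", ans

def signed_item():
    m, n = random.randint(-12, 12), random.randint(-12, 12)
    op = random.choice(["+", "-"])
    ans = m + n if op == "+" else m - n
    return f"({m}) {op} ({n}) = ?", ans

for make in (fraction_item, signed_item):
    question, answer = make()
    print(question, f"[answer: {answer}]")
```

A dozen of each could be generated and graded in seconds, and the whole screener would take a student maybe fifteen minutes.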
 
I'm just happening upon this. I am working in administration at a community college in the rust belt. We're working on an "accelerated" developmental ed curriculum as well, to reduce the number of exit points at which we lose students. Would you consider writing MORE about this, since our pilot won't run until Spring 2013? Do you know if other schools are jumping on this? Right now we're looking at the California/Chabot College model. You rock. Thanks.
 