Monday, April 01, 2019

Early Results from Florida…


If there’s one thing that any decent American political scientist knows, it’s not to trust early results from Florida.

That said, I was happy to see some first analyses of the impact of Florida’s mandated, statewide changes to remedial placement in community colleges.  As I understand it, the law entitled recent high school graduates to bypass placement tests entirely and place themselves either into altered developmental classes or directly into college-level classes.

Placement is a knotty issue.  No single method is foolproof, and many of the people who agree in concept that placement tests are a barrier default quickly to even more restrictive alternatives, given the chance.  What if we just said “ah, whatever” and threw it open? Florida did.

As both a student of policy and a working community college administrator, I’ve been eagerly awaiting the results.  Did cutting the Gordian knot result in better outcomes?

Mostly.

The Center for Postsecondary Success does a nice writeup here, and Ashley Smith’s story in IHE provides some good context.  The study makes a key distinction between “course-based pass rates” and “cohort-based pass rates.” In math, which is typically where the greatest challenges show up, dropping the placement test led to lower course pass rates, but higher cohort pass rates.  The increases were greatest for students of color, which means that not only did more students get through, but achievement gaps were reduced. That’s the gold standard.

The way that could happen is that more students get into the college-level class in the first place.  If the volume increase is enough, then even with a slight decrease in the pass rate for that course individually, more students will have made it through overall.  A large enough increase in the base can outweigh a lower rate.
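To make the arithmetic concrete, here's a quick sketch with invented numbers (these are illustrative only, not figures from the study): suppose placement tests used to route 400 of a 1,000-student cohort into college-level math, and self-placement bumps that to 700 while the course pass rate dips.

```python
def cohort_pass_rate(cohort_size, enrolled, course_pass_rate):
    """Share of the whole entering cohort that passes the college-level course."""
    passed = enrolled * course_pass_rate
    return passed / cohort_size

cohort = 1000

# Before: placement tests route only 400 of 1,000 into college-level math,
# and 70% of those pass the course.
before = cohort_pass_rate(cohort, enrolled=400, course_pass_rate=0.70)

# After: self-placement lets 700 enroll directly; the course pass rate
# dips to 60%.
after = cohort_pass_rate(cohort, enrolled=700, course_pass_rate=0.60)

print(f"cohort-based pass rate before: {before:.0%}")
print(f"cohort-based pass rate after:  {after:.0%}")
```

With these made-up numbers, the course-based pass rate falls from 70% to 60%, but the cohort-based pass rate rises from 28% to 42%: more students cleared the gatekeeper course overall, which is the pattern the study reports.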

Smith’s story points out that the law required more support for students who self-place, but didn’t necessarily provide the resources for it.  Predictably, that leads to uneven implementation, as some colleges have more resources and/or flexibility than others.

Smith’s story also points out that there may be some grade inflation lurking in those numbers.  That strikes me as conceptually easy to figure out: look at what happens to students at the next level up, whether in the next math class or at the next degree level.  If the newly successful ones quickly crashed and the numbers reverted to the mean, then yes, the “grade inflation” explanation would seem plausible. If they keep succeeding, though, it would suggest that grade inflation is either absent or situationally harmless.  In other words, it’s empirically testable. I hope somebody does.

Obviously, my interest in the outcome is based on potential portability.  Could something like that work here? There’s more work to be done, but I have to say the early signs are encouraging.  Combine those with some of what we’re seeing from other states, and the argument for maintaining the status quo is getting tough to sustain.  As well it should.

My thanks to the Center for Postsecondary Success for the study, and to Ashley Smith for an uncommonly good bit of context-setting in IHE.  There aren’t many topics as important as this one for community colleges; it’s worth getting right.