Sunday, October 20, 2013


“Piloting Ourselves to Death”

It’s no secret that I’ve been a fan of Kay McClenney for a long time.  She puts together brilliant panels, encapsulates the obvious in useful and valuable ways (“students don’t do optional”), and bases her findings on wide-ranging empirical research.  She even wrote the introduction to my book, which I considered a genuine honor.

That said, a quote of hers last week landed funny and deserves a response.  In an article about the gap between what community colleges know they should do, and what they have done, McClenney notes that colleges are “piloting ourselves to death” and need to focus on scaling up.

Concur in part, dissent in part.

It’s certainly true that any “high-impact practice” won’t have much impact on the people it doesn’t touch.  Improving graduation rates within a cohort of fifty students won’t make a tremendous difference in a population of thousands.  And it’s also true that just crying “we don’t have money” is an easy copout that doesn’t explain how some equally strapped institutions have managed to move forward.

But for certain interventions, it isn’t as simple as that.

On my own campus, for example, we’ve made new student orientation mandatory, and we’ve blocked new students from enrolling in classes once classes have started.  These changes seem to have helped with retention rates, and internal disagreement over them has been minimal.  Academic planning and tutoring have been here forever.  We have a learning communities program, though in that case our local insistence on running it in the most expensive possible way has limited its size.  

But a change like accelerating developmental sequences is far more complex than just making new student orientation mandatory.

It’s a curricular change, which means that it needs to be faculty-driven.  For that to happen, faculty have to be on board with the concept, and need to develop their own version of what it means.  Once that happens, they have to convince enough of their colleagues to get it through governance.  For good reasons, many colleagues are willing to take a flyer on something new, but don’t want to abandon the old until they’re convinced that the new is better.  Hence, the pilot.  

That’s what we’re doing with developmental math.  We have two versions of acceleration going, both devised entirely by the math faculty.  They’re waiting for enough results to come in to ensure that what seems to work nationally actually works here.  Sometimes it does, sometimes not.

In other words, pilots aren’t necessarily copouts.  They can allow for balancing shared governance, faculty ownership of curriculum, research-based innovations, and local assessment.  

That’s not to say that pilots are always the answer.  A pilot that can’t scale isn’t really a pilot; it’s a boutique.  And I absolutely agree with the critique of boutiques.  (“Critique of Boutiques” sounds like a twee indie band.)  But I’ve seen promising pilots fail, and in failing, spare us larger mistakes; I’ve also seen pilots make strong ideas stronger.  Even with interventions founded on strong national research, it’s important to get the local flavor right.  As the saying goes, culture eats strategy for lunch; if a given intervention doesn’t work with the local culture, it doesn’t work.  

None of this directly contradicts McClenney’s point, but it suggests some nuance.  Yes, pilots can be copouts.  But they can also be part of a much larger process that balances multiple needs.  From a central policymaking standpoint, the pace of change with pilots may be frustrating; I get that.  But on the ground, they make change sustainable.  A little more time upfront can lead to a much bigger payoff later.  I’ll take it.


What I recommend, based on local experience with something like what you describe, is building some scale-up into the next convenient cycle, perhaps even before you have processed all of the data from the first step.

Refining it while also working out how to bring in another full-timer plus a long-time adjunct will help the scale-up process if it works. That will make the next test point (whether a broader range of instructors gets the same results) easier to reach.

What "works" nationally also needs further testing. A number of studies show that active learning improves learning outcomes; this has been seen in intro biology courses, for example. However, follow-up randomized studies suggest that active learning by itself does not improve learning. The power of selection should not be underestimated.

Why the disconnect? The individual studies reported on non-random data sets. They were implemented by people interested enough in improving learning outcomes to take part in such studies; that is a self-selecting group. Likely there is more going on than active learning alone leading to improved outcomes. Because the educators in these cases are also the researchers, they miss these extra factors. They see more in their data than the data can actually support. They confuse new hypotheses with conclusions.

Too many studies overinterpret data in this manner, leading to conclusions that are not justified by the results. There are limits on what we can say about what works and what doesn't.

The challenge that I see, assessing programs across lots of K-12 and higher ed institutions, is that when there is no pilot, there is no way to tell whether what the school is doing actually works. (No control group.) But, OTOH, schools are often adopting too many new things all at once, so there would be no way to tell anyway. There are plenty of theories on why this happens, but that's a little off topic for today's post. Agree with the other Anonymous.