It’s no secret that I’ve been a fan of Kay McClenney for a long time. She puts together brilliant panels, encapsulates the obvious in useful and valuable ways (“students don’t do optional”), and bases her findings on wide-ranging empirical research. She even wrote the introduction to my book, which I considered a genuine honor.
That said, a quote of hers last week landed funny and deserves a response. In an article about the gap between what community colleges know they should do and what they have actually done, McClenney notes that colleges are “piloting ourselves to death” and need to focus on scaling up.
Concur in part, dissent in part.
It’s certainly true that any “high-impact practice” won’t have much impact on the people it doesn’t touch. Improving graduation rates within a cohort of fifty students won’t make a tremendous difference in a population of thousands. And it’s also true that just crying “we don’t have money” is an easy copout that doesn’t explain how some equally strapped institutions have managed to move forward.
But for certain interventions, it isn’t as simple as that.
On my own campus, for example, we’ve made new student orientation mandatory, and we’ve blocked new students from enrolling in classes once classes have started. These changes seem to have helped with retention rates, and internal disagreement over them has been minimal. Academic planning and tutoring have been here forever. We have a learning communities program, though in that case our local insistence on running it in the most expensive possible way has limited its size.
But a change like accelerating developmental sequences is far more complex than just making new student orientation mandatory.
It’s a curricular change, which means that it needs to be faculty-driven. For that to happen, faculty have to be on board with the concept, and need to develop their own version of what it means. Once that happens, they have to convince enough of their colleagues to get it through governance. For good reasons, many colleagues are willing to take a flyer on something new, but don’t want to abandon the old until they’re convinced that the new is better. Hence, the pilot.
That’s what we’re doing with developmental math. We have two versions of acceleration going, both devised entirely by the math faculty. They’re waiting for enough results to come in to see whether what seems to work nationally actually works here. Sometimes it does, sometimes not.
In other words, pilots aren’t necessarily copouts. They can allow for balancing shared governance, faculty ownership of curriculum, research-based innovations, and local assessment.
That’s not to say that pilots are always the answer. A pilot that can’t scale isn’t really a pilot; it’s a boutique. And I absolutely agree with the critique of boutiques. (“Critique of Boutiques” sounds like a twee indie band.) But I’ve seen promising pilots fail, and I’ve seen pilots help us either avoid mistakes or make strong ideas stronger. Even with interventions founded on strong national research, it’s important to get the local flavor right. As the saying goes, culture eats strategy for lunch; if a given intervention doesn’t work with the local culture, it doesn’t work.
None of this directly contradicts McClenney’s point, but it suggests some nuance. Yes, pilots can be copouts. But they can also be part of a much larger process that balances multiple needs. From a central policymaking standpoint, the pace of change with pilots may be frustrating; I get that. But on the ground, they make change sustainable. A little more time upfront can lead to a much bigger payoff later. I’ll take it.