From much of the national-level discussion of 'data-driven' reforms, you'd think that all we'd need to do is educate some administrators on the use and interpretation of data, tell them to use what they've learned, and that would be that.
If only it were that easy...
Over the last couple of years, I've been pushing for much stronger adherence to data-driven decision-making. In the process, I've seen firsthand many of the obstacles to its adoption. I put these obstacles out there not in the spirit of opposing data-driven decisions -- I still think it's a damn good idea -- but in the spirit of explaining why it's so hard to move from idea to implementation.
First, there's the availability of personnel. Although most colleges have Institutional Research (IR) departments, they're typically understaffed and overwhelmed with federal and philanthropic reporting requirements. It's harder to argue internally for resources for an IR staffer than for a direct service provider, since most of what IR does is at least one remove from working with students. If you don't get that Nursing professor, you'll have to shrink the program next year; if you don't get that IR staffer, well, what will happen, exactly? Since its short-term, direct benefits are harder to articulate, the IR position tends to lose budget battles to positions that are easier to explain. That makes short-term sense, but over time it means that the quantity and quality of data at hand will be limited.
Second, there's the data itself. Looking backwards, we can know only what we thought to track at the time. Sometimes we can rejigger old data to tell us new things, of course, but if certain key questions weren't asked -- it usually takes the form of "we didn't flag that in the system" -- then it's just not there. Colleges weren't designed as research projects, so a great deal of what has gone on over time was done without any kind of eye towards future research. Even when something is done self-consciously as a 'pilot' -- meaning as a research project -- it's often difficult to isolate the relevant variables. Did the small project succeed because it was well-designed, or because it was small?
Third, there's the clash between the need to plan ahead and the need to wait for the data. The owl of Minerva spreads its wings at dusk, but we can't always wait that long. When you have to set up next year's schedule before the results from this year's experiment are in, you have to default to hunches. If the program in question uses the gap time to tweak its own delivery, it can always explain away the first, lousy results with "yes, but that's before we did such and such." Worse, in any given case, that could be true.
Then, there's the clash between the drive to innovate and the deference owed to "past practices." This can sound trivial, but it's actually a major issue.
For example, one of the Gates Foundation programs contemplates setting up dedicated classes for at-risk students in which the program manager serves as the students' primary contact person, including acting as their academic advisor. The idea is to give students a trusted person to go to when issues arise. But the union here has taken the position that academic advisement is unit work, and can only be done by unit members. Since program managers are management, and therefore not unit members by definition, we can't follow the Gates guidelines even if we wanted to. It's a shame, too, since the program seems to have good early results where it has been tried.
The 'past practice' issues become hairier when you look at 'modular' or 'self-paced' alternatives to the traditional semester schedule. By contract, faculty workloads are based on credit hours and the semester calendar. (Similar expectations hold in the realm of financial aid.) If you break away from those models, you have to address workarounds for financial aid -- which carry serious workload impacts for the financial aid staffers -- and unit concerns about workload equity. Maintaining workload equity while experimenting with different delivery formats is no easy task, and some unit members are just looking for an excuse to grieve, for reasons of their own. It's not impossible, but the process of 'impact bargaining' and its attendant concessions amounts to an unacknowledged deadweight cost. That's before even mentioning the time and effort involved in dealing with grievances.
Then, of course, there's the tension between fact and opinion. When decisions have long been made through group processes dominated by a few longstanding personalities, those personalities will read anything data-driven as a threat to their power. Which, in a way, it is. I saw this a couple of years ago in a discussion of a course prerequisite. A particular department argued passionately that adding a prereq to a particular course would result in much higher student performance. The college tried it, and student performance didn't budge. After two years of no movement at all in the data, I asked the chair if he had changed his mind. He hadn't. He wanted his prereq, dammit, and that was that. Facts were fine in theory, but when they got in the way of preference, preference was assumed to be the "democratic" option. Since facts aren't subject to majority rule, reliance on facts is taken as anti-democratic.
Alrighty then.
Finally, there's the tension between the culture of experimentation and the culture of "gotcha!" The whole point of the data-driven movement is to get colleges to try lots of different things, to keep and improve what works, and to junk what doesn't. But when a college has an entrenched culture of "gotcha!," you can't get past that first failure. If something didn't work, you'll get the self-satisfied "I told you so" that shuts down further discussion. A culture of experimentation, by definition, has to allow for some level of failure, but that presumes a cultural maturity that may or may not be present.
None of these, or even all of them in combination, is enough to convince me that we should stop testing assumptions and trying new things. But they do help explain why it isn't nearly as easy as it sounds.