Thursday, April 22, 2010

 

"Data-Driven"

From much of the discussion of 'data-driven' reforms that takes place at the national level, you'd think that all we'd need to do is educate some administrators on the use and interpretation of data, tell them to use what they've learned, and that would be that.

If only it were that easy...

Over the last couple of years, I've been pushing a much stronger adherence to data-driven decision-making. In the process, I've seen firsthand many of the obstacles to its adoption. I put these out there not in the spirit of opposing data-driven decisions -- I still think it's a damn good idea -- but in the spirit of explaining why it's so hard to move from idea to implementation.

First, there's the availability of personnel. Although most colleges have Institutional Research (IR) departments, they're typically understaffed and overwhelmed with federal and philanthropic reporting requirements. It's harder to argue internally for resources for an IR staffer than for a direct service provider, since most of what IR does is at at least one remove from working with students. If you don't get that Nursing professor, you'll have to shrink the program next year; if you don't get that IR staffer, well, what will happen, exactly? Since it's harder to argue for short-term direct benefits, it tends to lose battles to positions that are easier to explain. While that makes short-term sense, over time it means that the quantity and quality of data at hand will be limited.

Second, there's the data itself. Looking backwards, we can know only what we thought to track at the time. Sometimes we can rejigger old data to tell us new things, of course, but if certain key questions weren't asked -- it usually takes the form of "we didn't flag that in the system" -- then it's just not there. Colleges weren't designed as research projects, so a great deal of what has gone on over time was done without any kind of eye towards future research. Even when something is done self-consciously as a 'pilot' -- meaning as a research project -- it's often difficult to isolate the relevant variables. Did the small project succeed because it was well-designed, or because it was small?
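To make the "because it was small" worry concrete: even with no design effect at all, a small pilot's numbers bounce around quite a bit. Here's a minimal sketch, assuming an invented 70 percent underlying pass rate and a 25-student pilot; neither number comes from any real program.

    # Hypothetical sketch: how much can a small pilot's results move by chance alone?
    # The 70% pass rate and 25-student pilot are invented, not from any real program.
    import random

    random.seed(42)
    TRUE_PASS_RATE = 0.70
    PILOT_SIZE = 25
    TRIALS = 10_000

    # Simulate many pilots drawn from the same population; record each observed pass rate.
    observed = sorted(
        sum(random.random() < TRUE_PASS_RATE for _ in range(PILOT_SIZE)) / PILOT_SIZE
        for _ in range(TRIALS)
    )

    low = observed[int(0.025 * TRIALS)]
    high = observed[int(0.975 * TRIALS)]
    print(f"True pass rate: {TRUE_PASS_RATE:.0%}")
    print(f"95% of {PILOT_SIZE}-student pilots land between {low:.0%} and {high:.0%}")
    # A pilot that "beats" the baseline by ten points may just be sampling luck.

None of that settles the design-versus-scale question, but it does suggest how much weight a single small cohort can bear.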

Third, there's the clash between the need to plan ahead and the need to wait for the data. The owl of Minerva spreads its wings at dusk, but we can't always wait that long. When you have to set up next year's schedule before the results from this year's experiment are in, you have to default to hunches. If the program in question uses the gap time to tweak its own delivery, it can always explain away the first, lousy results with "yes, but that's before we did such and such." Worse, in any given case, that could be true.

Then, there's the clash between the drive to innovate and the deference required to "past practices." This can sound trivial, but it's actually a major issue.

For example, one of the Gates Foundation programs contemplates setting up dedicated classes for at-risk students in which the program manager serves as the primary contact person for the students, including being their academic advisor. The idea is to give students a trusted person to go to when issues arise. But the union here has taken the position that academic advisement is unit work, and can only be done by unit members. Since management is not unit work by definition, we can't follow the Gates guidelines even if we wanted to. It's a shame, too, since the program seems to have good early results where it has been tried.

The 'past practice' issues become hairier when you look at 'modular' or 'self-paced' alternatives to the traditional semester schedule. By contract, faculty workloads are based on credit hours and the semester calendar. (Similar expectations hold in the realm of financial aid.) If you break away from those models, you have to address workarounds for financial aid -- which have serious workload impacts for the financial aid staffers -- and unit concerns about workload equity. Maintaining workload equity while experimenting with different workload formats is no easy task, and some unit members are just looking for an excuse to grieve, for reasons of their own. It's not impossible, but the process of 'impact bargaining' and its attendant concessions amounts to an unacknowledged deadweight cost. That's before even mentioning the time and effort involved in dealing with grievances.

Then, of course, there's the tension between fact and opinion. When there's a long history of decisions being made through group processes dominated by a few longstanding personalities, those personalities will read anything data-driven as a threat to their power. Which, in a way, it is. I saw this a couple of years ago in a discussion of a course prerequisite. A particular department argued passionately that adding a prereq to a particular course would result in much higher student performance. The college tried it, and student performance didn't budge. After two years of no movement at all in the data, I asked the chair if he had changed his mind. He hadn't. Dammit, he wanted his prereq, and that was that. Facts were fine in theory, but when they got in the way of preference, preference was assumed to be the "democratic" option. Since facts aren't subject to majority rule, reliance on facts is taken as anti-democratic.

Alrighty then.
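For what it's worth, "didn't budge" is itself a checkable claim. Here's a minimal sketch of a two-proportion comparison, with made-up before-and-after cohorts standing in for the real prereq numbers.

    # Hypothetical sketch: did pass rates actually "budge" after adding the prereq?
    # The cohort counts below are invented for illustration; plug in the real numbers.
    import math

    def two_proportion_z(pass_a, n_a, pass_b, n_b):
        """Two-proportion z-test: returns the z statistic and a two-sided p-value."""
        pooled = (pass_a + pass_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (pass_b / n_b - pass_a / n_a) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Before the prereq vs. after (made-up numbers: 70% vs. 73% pass rates).
    z, p = two_proportion_z(pass_a=140, n_a=200, pass_b=146, n_b=200)
    print(f"z = {z:.2f}, p = {p:.2f}")
    # A p-value this large is what "no movement at all" looks like:
    # chance alone explains a gap this small in cohorts this size.

It won't change a chair's mind, obviously, but it moves the argument from dueling impressions to something you can put on a single page.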

Finally, there's the tension between the culture of experimentation and the culture of "gotcha!" The whole point of the data-driven movement is to get colleges to try lots of different things, to keep and improve what works, and to junk what doesn't. But when a college has an entrenched culture of "gotcha!," you can't get past that first failure. If something didn't work, you'll get the self-satisfied "I told you so" that shuts down further discussion. A culture of experimentation, by definition, has to allow for some level of failure, but that presumes a cultural maturity that may or may not be present.

None of these, or even all of them in combination, is enough to convince me that we should stop testing assumptions and trying new things. But they do help explain why it isn't nearly as easy as it sounds.

Comments:
I arrived at your blog today via a tweet from @JeanetteMarie, a colleague who writes about adult learner issues at her blog (Learn, Unlearn, Relearn) and also community college issues.

I did assessment projects for Residence Life at Penn State, and can say that your comments are dead-on.

The difficulty of navigating institutional politics can be especially frustrating. It isn't just about what to justify or what to change or get rid of. Many projects stumble coming out of the gate due to disputes about who should be doing the assessment in the first place...the department, the division's assessment officer, or the college's formal "institutional assessment" office.

Such political wrangling gets in the way of common sense, because the higher up (or further out) the chain you go, the less connected you are to the issues that prompted the assessment in the first place.

It's encouraging that so many master's programs (from which we draw new staff) are emphasizing assessment skills. Hopefully, we'll work to get over some of our hangups and learn how to harness these skills and properly direct them. Until then, I guess we have no choice but to keep "corralling the cats" until some of them come home.
 
This same sort of challenge exists in the corporate world. The consequences of failed risk-taking and attempts at innovation can be so strong that individuals are afraid to try anything new or different. What results is a stagnant organization that does things the same way they've always been done.

Some organizations have caught on to this danger and not only encourage risk taking but actually celebrate some of their failures. They've found that the best way to counter the fear of failure when trying new things is to celebrate some of the great ones. It relieves some of the pressure and turns the failure into an opportunity to say, "Even though this didn't work out, it was a great idea. What can we learn from it, and what might we try differently next time?"
 
Our IR department misreported the enrollment data for two of our majors (a general BA and a concentration) for 4 years. This led some faculty in our department to believe that the enrollment for a particular part of the curriculum was much higher than it was and to fight viciously for new faculty. It finally all got sorted out but the political damage done will persist for the next decade or so.

IR is important and when it gets screwed up, it causes real problems.
 
Another thing is that while there are very good things about being "data driven", it is also a buzzword right now. I've seen people generate thick stacks of dead trees with largely meaningless "data" in it, but there are numbers and discussions of the numbers and buzzwords in it. I suspect that some of the people up the ladder actually see through it, but since they have to pull teeth to get anybody to do assessment they can't exactly turn around and start chewing out the only people who actually bothered to do some work on it.

I've seen other people get excited over numbers that mean nothing. My department chair is excited because we're now keeping records of attendance at seminars. That would be great if he wanted to use this data to justify things related to seminars, but he thinks we can use it to justify all sorts of things only very, very peripherally related to the seminar (if at all) because "It's numbers! It's assessment!"

Well, it's definitely something with "ass" in the name...
 
Three thoughts:

Although most colleges have Institutional Research (IR) departments, they're typically understaffed and overwhelmed...

Spoken like an ex-literature prof. :) Let your IR people stick to record-keeping, then take a stroll over to your math and social science departments and you'll find more than enough expertise for your analysis. If you turn over enough rocks you might even find someone like me who is already doing their own assessment and would help you with college-wide data for little more than kudos and a beer. After all, you can tell him, curricular decisions are faculty governance, right?

There is also more to the time-frame issue you raise. Your desire to frame a question, test it, and implement the answer "next fall" strikes me as horribly out of place in an academic setting. Maybe welding schools work that way, but not people who teach from 2500 year-old texts and wear 500 year-old costumes even when it's not Halloween. Try something like "have an idea one semester, talk about it the next, try the test the following semester, chuck those results out because you later realized that your tool was flawed (it always is), try the revised tool again for a few semesters so you have enough data to be reliable, and then a semester to let the results percolate through the relevant committees." You'll have a lot more luck convincing people that way. There is more to this than a vague stodginess, though.

Putting my research methods instructor's hat on, I'm pretty confident that any data you gather in a short-term (one-year) project is subject to enough unknowable measurement error to warrant a great deal of skepticism. Personally, I wish that "data-driven" advocates would think about their results like a Bayesian, in cycles of "start with a belief, compare it to a piece of the world, modify the belief a little, compare it to the world, again, and again." Not only does that produce more robust results, it also makes a much more convincing argument than "our quick study last semester said A so we're going to re-write our curriculum to feature A."
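A minimal sketch of that cycle, using a Beta-Binomial model and invented semester counts (the shape of the process is the point, not the numbers):

    # A sketch of "start with a belief, nudge it each cycle" using a Beta-Binomial model.
    # Start with a weakly held belief about a course's pass rate: Beta(2, 2), centered on 50%.
    alpha, beta = 2.0, 2.0

    # Invented semester-by-semester counts of (passed, failed).
    semesters = [
        ("Fall",   18, 7),
        ("Spring", 22, 9),
        ("Summer", 11, 3),
    ]

    for name, passed, failed in semesters:
        # Each semester's data nudges the belief; no single term settles the question.
        alpha += passed
        beta += failed
        mean = alpha / (alpha + beta)
        print(f"After {name}: estimated pass rate ~ {mean:.0%} (Beta({alpha:.0f}, {beta:.0f}))")

After a few cycles the early noise washes out, and the belief you're acting on reflects more than one semester's quirks.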

Finally, I've listened to numbers for years, but you know, I've never heard them speak for themselves. The notion that "facts are facts" and there is only one implication thereof is a superficial way of looking at data. While the example you cite may be more cut and dried than most, I would suspect even that one is more nuanced than you describe. Some of the negative reactions you encounter may be from people who resist your assertions about the "facts" (especially from short-term research).
 
I'd like to believe in data driven decisions, but my experience with writing our department assessment plan and watching this year's budget meltdown suggests otherwise.

I like collecting data on teaching and I do think it can be useful to shape curriculum development. It even helps me improve my teaching, but it has become painfully clear to me that it is not the data that counts.

Due to budget issues, the four year comprehensive where I work has decided to double the size of our introductory surveys. The idea is that we should be a 'cash cow' for the college of liberal arts. And we need to do this with fewer faculty members.

One of the department learning outcomes is writing proficiency demonstrated by a senior thesis. Well, all of our majors take six survey classes and we expect them to write essays in those classes. Doubling the size of the survey means scantron tests instead of essays.

So a curriculum decision that should have been made on the basis of assessment data was actually made on the basis of budget data. That decision effectively blows up our assessment plan too. Next time I will not spend as much time working on that assessment plan, because frankly, I know that the budget makes it irrelevant.
 
I'm curious about the 500 year old costumes...
 
Rubashov has it right - make sure that the IR people (a) collect the data needed for standard reporting, (b) understand what it means, and (c) make nice with the database and admin people so that they know the sources, and know who is collecting data for operational use that isn't part of IR (yet).

Then get some stats and social science people involved. Please note that local universities have some of these, and are frequently willing to help. Especially if publications and dissertations are involved.

If things are going well at that level, *and* the funding is secure for years, then hire a stats guy yourself.
 