Wednesday, July 27, 2011

If I Could Get Away With It...

I’d have everyone on campus read Moneyball, by Michael Lewis.

It’s several years old now, but its point still holds. It’s about a general manager of a baseball team who figured out, years before his colleagues, that he could test the ‘folk wisdom’ of scouts against actual statistics. In some cases, the stats showed that what the scouts took as received truth didn’t hold up. But there’s no way an individual scout, looking only at his own players, would ever know that; the patterns only showed up when you abstracted from individual experience and looked at large numbers of cases over time. (For example, this general manager realized before most others did that “on-base percentage” mattered more than “batting average,” so he was able to make some lopsided trades.)

Some people within baseball read Moneyball but missed the point. They simply replaced the old rules of thumb with new ones. The point was that all wisdom needs to be tested empirically, and that what works can change over time. On-base percentage was an example of the point, rather than the point itself. Once the rest of the sport wised up to on-base percentage, that measure lost its usefulness for improving a team. (Now they’re trying to develop good stats for fielding.)

Although the context for the argument is baseball, the point holds outside it. Empirical data drawn from large numbers of cases can contradict folk wisdom that seems right. And when that happens, it’s time to call the folk wisdom into question.

I mention this because I keep running across a few programs that believe, with all sincerity and more than a little self-righteousness, that they are the absolute best at what they do. They’re quick with anecdotes and testimonials, and they can tell stories that go back decades. And the numbers -- the actual, honest-to-blog numbers -- show they’re wrong.

But these are the kind of numbers that require looking at a decade of performance and hundreds or thousands of students. You won’t see those, or figure them out, through the daily experience of teaching classes. They aren’t apparent at ground level. The folks who honestly believe that the programs are successful aren’t lying, any more than the scouts who picked the wrong players were lying; they’re just wrong. Sincere and well-meaning, yes, but wrong.

This is one area where I believe administrators have something very real to contribute to discussions of curriculum. If a program that is supposed to, say, improve graduation rates actually harms them, it’s easy not to see that in the face of real success stories and impassioned advocacy. But not seeing it doesn’t make it go away. Having someone whose job it is to provide that external perspective has real value. That’s not to discount the experience of the folks in the trenches, but sometimes the aerial view shows things that simply aren’t visible from ground level.

Culturally, this has proved a difficult argument to make. It’s hard to tell a proud staff with years of confirmation bias behind them that the numbers don’t mesh with their story. They react defensively, and some decide that the best defense is a good offense. But reality is stubborn.

It’s becoming clear to me that there’s cultural work to be done before the statistical work can show its value. People have to accept the possibility that what they see right in front of them, day after day, doesn’t tell the whole story, and that it can even lie. They need to be willing to get past the “I’m the expert!” posturing and accept that truth is no respecter of rank.

And that’s why, if I could get away with it, I’d have everyone read Moneyball.

Wise and worldly readers, assuming that there’s little appetite for allegories drawn from professional sports, is there a better way to make this point? A way that fits with academic culture, and that allows everyone to save face, but that still gets us past the discredited legends of the scouts?