Thursday, September 12, 2013

 

Predictions and Data



In the comment thread to this piece, several people discuss a great question: how is it possible to base decisions on data and innovate at the same time?  After all, anything new, by definition, won’t have data; anything for which the data are solid, by definition, won’t be new.  If the owl of Minerva spreads its wings only at dusk, what do you do in the meantime?

I liked the question a lot because I’ve seen the conflict on the ground repeatedly.  And it speaks to a habit of mind that I had to unlearn when I moved into administration.

I’ll concede upfront that there’s a basic philosophical issue around causality.  David Hume famously pointed out that inductive reasoning can never be entirely certain: just because the sun has risen in the East every single day since time immemorial doesn’t necessarily mean that it will tomorrow.  It’s very likely to -- I’m not worried about it -- but probability isn’t certainty.  

In the case of the sun rising, the data are so strong, and so in line with our intuitions, that for all practical purposes it’s a non-issue.  But with on-campus issues, the problem of inference and causality is real.  Last year we made new student orientation mandatory, and retention improved.  Did the former cause, or at least contribute to, the latter?  And how can we know?

When something new comes along, it’s easy to object that the idea is “unproven.”  Taken to its logical conclusion, that position quickly becomes “never do anything for the first time.”  It’s deadening.  

I won’t pretend to have solved Hume’s objection.  But there are some practical expedients that can improve decision making even when the data are nonexistent.

First, when setting up a new project, build assessment measures and mechanisms into it from the start.  What would success look like?  How will you know if it worked?  What are your realistic goals?  (A few years ago I was in a meeting with a now-departed colleague, in which we discussed retention goals.  She suggested a goal of 100 percent.  I suggested gaining a point per year.)   
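For the quantitatively inclined, part of “how will you know if it worked?” is deciding what would count as a real change rather than noise.  Here’s a minimal sketch of that kind of check, in Python, using entirely made-up cohort numbers (not anything from my campus) and a standard two-proportion z-test:

# A rough sketch (hypothetical numbers): did retention move by more than noise?
# Two-proportion z-test comparing last year's cohort to this year's.
from math import sqrt, erf

def two_prop_ztest(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical cohorts: 68% retention before, 69% after (one point per year).
z, p = two_prop_ztest(successes_a=1360, n_a=2000, successes_b=1380, n_b=2000)
print(f"z = {z:.2f}, p = {p:.2f}")
# With ~2,000 students per cohort, a one-point gain is hard to distinguish from noise
# in a single year -- which is exactly why the goal has to be modest and multi-year.

The point isn’t the formula; it’s that a realistic goal comes with a realistic sense of how long you’ll have to watch the numbers before you can say anything.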

Second, start small enough that you could afford failure.  Not every brilliant idea succeeds.  If you assume that some percentage of experiments will fail -- I’d suggest that’s inherent in the definition of “experiment” -- then you need enough slack in your system that some failures won’t kill you.  
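How small is small enough?  There’s a floor as well as a ceiling: a pilot too small to produce more than anecdotes can’t tell you anything.  Here’s a back-of-the-envelope sizing sketch, again with purely illustrative numbers, using the standard two-proportion sample-size approximation:

# Back-of-the-envelope pilot sizing (illustrative numbers only):
# how many students per group to detect a given retention lift at 80% power, alpha = 0.05?
from math import sqrt, ceil

Z_ALPHA = 1.96   # two-sided, alpha = 0.05
Z_POWER = 0.84   # 80% power

def n_per_group(p_baseline, p_target):
    """Standard two-proportion sample-size approximation."""
    p_bar = (p_baseline + p_target) / 2
    numerator = (Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar))
                 + Z_POWER * sqrt(p_baseline * (1 - p_baseline) + p_target * (1 - p_target))) ** 2
    return ceil(numerator / (p_target - p_baseline) ** 2)

# Hypothetical: baseline retention of 68%, hoping the pilot lifts it to 73%.
print(n_per_group(0.68, 0.73))  # roughly 1,300 students per group
# A five-point lift needs four-figure cohorts; a one-point lift needs tens of thousands.

In other words, “affordable failure” and “measurable success” pull in opposite directions, and the honest move is to pick a pilot size with both in view.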

Third, have a ramp-up process in mind.  This is where many grant-funded projects come to grief, and it’s why administrators everywhere get nervous looks on their faces when the subject of “boutique” programs comes up.  Let’s say that the initial results from the small intervention are positive.  Now what?  Have you designed the project in such a way that you could scale it up to significant size with the budgets you could realistically expect to have?  It’s pretty well-established that if you skim the cream of your best faculty and best students and give them outsize resources, you can beat your own average with that cohort.  But if it’s unsustainable, what have you done?  If it can’t survive scaling, it’s of limited usefulness.  Build to tolerate failure, yes, but also build to tolerate success.

The habit of mind I had to unlearn was looking for the unassailable position.  Grad school teaches, among other things, that if you meet a theory on the road, you try to kill it.  The idea is to develop the skills to spot flawed arguments, so you can build strong ones.  But in administration, if you wait for that kind of clarity, you’ll wait until the issue is moot.  You have to learn to make peace with the reality of partial information.  No, we don’t yet know whether making new student orientation mandatory was a difference-maker.  But we couldn’t hold off on the Fall semester until we knew; the Fall semester started when it started, and we had to make a call.  So we did the best we could, and decided to keep it.  The data will flow in again, and we’ll reconsider.  

Yes, in pure theoretical terms, data-based decisions and innovation are opposed.  But if you’re willing to give up the idea of perfection, they can actually help each other.  You just have to fill in the gaps as best you can.

Comments:
"... when setting up a new project, build assessment measures and mechanisms into it from the start." Bingo. You don't even have a clear idea of your objectives until you define how you will assess it. Ideally, those assessments would be among the ones already being applied to an existing program. You must have baseline data.

"When something new comes along, it’s easy to object that the idea is “unproven.”" That is too easy, because the counter objection is that the old idea is quite likely also unproven. Is it assessed? What was it compared to, and when?

"start small enough that you could afford failure" BUT, I would add, big enough to have statistics rather than anecdotes.

One of our experiments started with the cream of the faculty (but not selected students) to sort out the details with enough students to spot obvious issues, but it wasn't an experiment until they moved toward a 50:50 effort where you could compare comparable situations. There were both full time and adjunct faculty in both groups, for example.
 
I think it's important to differentiate between goals, aspirations, values, and objectives.

You value student success, therefore, you aspire to 100% retention and make your goal to improve 1% a year by achieving the objective of 100% intrusive advising to all students with characteristic X.

Each year, you try a few things that might help and retain those that appear to work. Eventually, your system has evolved or improved to better deal with your conditions. The key is to have everyone share at least some of the same values, buy into the same aspirations, and agree that the goals are reasonable and the objectives supportable.
 