Tuesday, September 24, 2013


Look! A New Thing!

A longtime correspondent writes:

So here's one for you.  I'm on the planning committee (at a multi-campus institution) for a project to develop individual campus initiatives designed to improve the delivery of higher education.  Each campus develops a campus team, those teams develop a project (with some assistance and guidance from the planning committee), and begin implementation over a 2-year period.  The first project developments began in the 2012-2013 year, and the two years that we on the planning committee are involved end in May 2014.  So far, seems OK. But here's the kicker.  There's going to be a new round of this beginning this academic year, with project teams identified for a 2014-2016 cycle.  Or, just as the first projects begin to be implemented, everyone's attention turns to developing new projects that will begin to be implemented just when everyone's attention turns to developing new projects... To say nothing of what happens if any campus gets a new chief administrative officer (a/k/a president) or a new chief academic officer, or if the system gets a new academic VP...

I can’t pretend not to recognize this, at least a little bit.  I’m quite a fan of experimentation, and someone who made a list of every single thing we’re trying at any given moment could, if so inclined, make things look a little busy.  And I say that without apologies.

A few thoughts.

First, it’s crucial to recognize the life cycle of experiments.  The design phase always takes longer than it seems like it should, but that’s okay if you’re doing it right.  Build in assessment metrics, definitions of success, and “what if it works?” scenarios.  If there’s no forethought given to scaling up or following through, then I’m not sure why it’s being done.  If an organization is too easily distracted by the shiny object of each successive month, it will forever trap itself in the high-effort, low-payoff part of the learning curve.

Second, it’s crucial for the various experiments to have a common, discernible theme.  What is the point of it all?  Do people know how this year’s experiment follows from the previous one, or does it just seem like a random parade of stuff?

Third, and this one is tough, it’s crucial to admit, publicly, when something doesn’t work.  People have long memories, and the “erase the comrade from the photo” approach tends to breed an understandable cynicism.  Having a coherent story over time can turn short-term failures into learning moments.

Finally, though -- and I’m still chewing on this one -- there’s a complicated and difficult relationship between high-level turnover and people’s willingness to take risks.  The comment about the new VP rings true.  Folks who have been around for a long time can tell tales of priorities shifting abruptly when a new chief arrived.  It may seem counterintuitive, but relatively stable leadership can actually lead to a more venturesome culture, if the leadership makes a point of supporting it consistently.  Relative stability can lead to a visceral sense of trust that a failed experiment won’t result in the rolling of heads.  Depending on priorities, of course, it can also lead to complacency, or favoritism, or vanity projects, or whatever else.  But if you have predictably progressive leadership for an extended period, good things can happen.

In this case, it sounds like the attitude is right, but the follow-through and narrative coherence are missing.  In its willingness to embrace the next new thing, your leadership group isn’t tending to the second (and more important) half of the experimental life cycle.  

I’m not sure how to work around that.  I was in a similar situation many, many years ago, in which I tried repeatedly to connect the dots for the folks higher up, but they just kept doing what they were doing.  Eventually, when it became clear that my entreaties were falling on deaf ears, I left.  

Wise and worldly readers, have you found or devised effective ways to get a leadership group that’s too easily entranced by the latest shiny object to develop some patience?

Based on my years of experience at a large R1, I recommend forming an ad hoc committee to study the problem, then issuing a report recommending that the ad hoc committee be made permanent.
RE comment above:
... until every professor is head of a program committee. !!

"Build in assessment metrics, definitions of success, and “what if it works?” scenarios."

Absolutely. From the start. The assessment needs to be done during the planning stage to document the problem you expect to fix. That was a key part of one of our CC's most successful initiatives. And then you wait for the answer, including after each scale-up attempt.

and then

"... admit, publicly, when something doesn’t work."

ROFL at the “erase the comrade from the photo” approach!