Monday, March 06, 2017

 

Ethics, Design, and Control


A few weeks ago I did a piece wondering aloud about whether predictive analytics on campus could inadvertently contribute to what Claude Steele calls “stereotype threat,” and thereby become self-fulfilling in negative ways.  So I was excited to see a report from Manuela Ekowo and Iris Palmer at New America on ethics and predictive analytics.  It’s a set of recommendations for avoiding unintended negative outcomes.  It’s potentially useful, and worth keeping as a reference.

That said, though, I was struck by a couple of assumptions embedded in the report.

The first is the action model: first you plan, then you build, and then you use what you’ve built.  As a former colleague once asked me, “if you don’t know where you’re going, how will you know when you get there?”  

I’ve never been a fan of that model, though it took a while to figure out why.  It assumes a level of omniscience at the outset that’s simply untenable.  (It also assumes that implementation is largely mechanistic, with strategic thought confined to officially sanctioned strategic planning.)  Yes, it’s worthwhile to give some forethought to ways to avoid potentially damaging unintended consequences.  But it’s also important to build in a feedback loop, because there is no earthly way to anticipate them all until the system is actually up and running.  That’s not to endorse negligence or fatalism; it’s just to say that nobody is omniscient, and good design needs to be porous and iterative to reflect that.  

The second one, though, is more fundamental.  It’s the assumption that the genie can be confined to the campus.  Given the political world, it can’t be.  We need to plan for that.

Over the course of my career, I’ve seen people profit, several times, by using data in ways the folks who generated it never intended.  That doesn’t imply criminality; the uses were perfectly legal.  They were just selfish.  For example, one community college I shall not name games its graduation rate by preventing students in remedial classes from taking full-time schedules in their first semester.  Since the official grad rate reflects only “first-time, full-time” students, that move excludes remedial students from that college’s grad rate, and makes that college’s rate look falsely better than its counterparts’.  There’s nothing illegal in that, though I would argue that it’s unethical.  It’s what policy scholars call “gaming the system,” and it’s inevitable.  
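The arithmetic behind that move is worth seeing directly.  Here’s a minimal sketch, with invented numbers, of how restricting the reported cohort to “first-time, full-time” students inflates the rate when remedial students are steered into part-time schedules:

```python
# Hypothetical illustration of gaming an IPEDS-style "first-time,
# full-time" (FTFT) graduation rate.  All numbers are invented.

def ftft_grad_rate(cohort):
    """Graduation rate counting only full-time entrants -- the
    students who land in the official FTFT cohort."""
    ftft = [s for s in cohort if s["full_time"]]
    return sum(s["graduated"] for s in ftft) / len(ftft)

# 100 college-ready students, 60 of whom graduate;
# 100 remedial students, 30 of whom graduate.
ready = [{"full_time": True, "graduated": i < 60} for i in range(100)]
remedial_ft = [{"full_time": True, "graduated": i < 30} for i in range(100)]
remedial_pt = [{"full_time": False, "graduated": i < 30} for i in range(100)]

# Honest college: remedial students enroll full-time and count.
honest = ftft_grad_rate(ready + remedial_ft)   # 90 / 200 = 45%

# Gaming college: remedial students forced part-time, so they
# vanish from the official cohort entirely.
gamed = ftft_grad_rate(ready + remedial_pt)    # 60 / 100 = 60%

print(f"honest cohort rate: {honest:.0%}")
print(f"gamed cohort rate:  {gamed:.0%}")
```

Same students, same graduates, a fifteen-point swing in the reported rate, purely from who gets counted.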

A college under the gun to improve its retention and completion numbers -- whether because of statewide performance funding, political pressure, administrative careerism, or whatever else -- could use analytics to steer resources and efforts towards gaming the numbers.  And if one college doesn’t, one of its counterparts will.  

Worse, a college composed of people of goodwill who are actually trying to do the right thing can inadvertently give a free pass to legislators to continue to bleed it dry.  If heroic and wise efforts on campus mitigate the damage done by funding cuts, and students seem to be substantially unharmed, legislators may conclude that there’s no downside to cutting higher education funding.  Colleges would be punished, in effect, for improving.  I have personally seen that.  I once had a senior statewide official, who shall remain nameless, say to my face that “you guys are amazing.  After years of cuts, you keep getting better.”  I had to count to ten.

That kind of perversion of the completion agenda is a real danger, and one that no amount of on-campus strategic planning can prevent.  Instead, we need to anticipate it, and find ways to work around it.  That makes for a messier diagram, but a better outcome.

None of this is intended as a shot at Ekowo and Palmer’s report.  It’s thoughtful and useful, and it reflects serious concerns.  It’s just to advocate for some epistemological humility, and for a recognition that interpretations don’t stop at the level of the campus.  We can design data systems, and we can build ethical safeguards into them.  But at a basic level, we can’t assume that we’ll be able to control how other people use or interpret the outcomes.  At best, we can build in feedback loops to stop particular sorts of damage from getting worse as they become evident.  Let’s build some design thinking into the design.  Otherwise, we can wind up being punished for success, even as we congratulate ourselves on our ethics.


