Monday, July 30, 2018

Driverless Cars and Directionless Politics


I just finished listening to Annie Lowrey’s new book “Give People Money,” which is about the policy idea of a Universal Basic Income (UBI), and it got me thinking.  What if many of the jobs for which we prepare people now are going away in a few years, to be replaced with AI-driven automation?

The question could go in several directions, of course.  Some supporters of UBI, especially some of the tech evangelists, see it as a sort of life support for the many who will have been rendered economically valueless by the robots.  If we have self-driving trucks, the argument goes, who needs truck drivers? That’s a lot of jobs eliminated in a short time. The economic danger is that if machines generate most value, then most money will accrue to the owners of the machines.  (For present purposes, “machines” could be defined to include something as ethereal as an algorithm.) Staving off mass starvation, or revolution, would require throwing the masses some bones. UBI could theoretically accomplish that.

Some libertarians advocate UBI as a wholesale replacement for public institutions.  Do away with public schools, public roads, food stamps, and the rest of it, and just send people money to use as they will.  (Charles Murray is a well-known advocate of this argument.) Let the market provide; just make sure that everyone has enough to participate in it, even if at a low level.  Murray might respond to the idea of free community college by proposing instead giving people money and letting them pay for the education they want, selected from the marketplace.  I’m not a fan of this perspective, but it’s out there.

From my perch at a community college, though, I’ll take a narrower view.  If many jobs are soon doomed to go the way of elevator operators, then what should we be preparing students for?  At a basic ethical level, we shouldn’t do the equivalent of training buggy-whip makers in 1910.

But when I think about the jobs for which we actually train, I don’t see where most of the AI would go.  We train K-12 teachers, but I don’t see that moving to automation anytime soon; children need human interaction.  We train law enforcement officers, whose work increasingly relies on technology, but who are still going to be human beings.  Nursing draws on tech, too, but is a decidedly human field. Yes, some restaurants have gone to touch screens to reduce waitstaff, but the back of the house is still human.  Management remains a human endeavor, at least when it’s done right. And the jobs for which students transfer to four-year schools remain largely bound to humans. Even better, some fields will require more humans with higher skills; anyone who thinks that self-contained systems are seamless hasn’t lived through software system updates.

Even that, though, strikes me as a second-order question.  Picking winners ten years in advance is a bit of a fool’s errand, even if we’re reasonably sure that winners will exist.  If I knew what the next big thing would be, I’d buy stock in it.

This isn’t the first time that technology has led to fear of mass unemployment.  Lowrey notes, correctly, that John Maynard Keynes extrapolated from trends prior to the 1930s to predict that we would have harnessed increased productivity to reduce the workweek to fifteen hours by now.  Other thinkers made similar arguments in various ways. Oscar Wilde suggested in “The Soul of Man Under Socialism” that the best use of improved productivity would be greater leisure. David Riesman argued in “Abundance for What?” that the great social crisis of the late 20th century would be the explosion of leisure time made possible by rampant productivity gains.  (He even predicted a need for “leisure consultants,” which I think explains the emergence of the various Real Housewives series.)

I mention those not just to bemoan missed opportunities, but to suggest that there’s a more basic task at hand.

Production is largely a technical issue.  Distribution is very much a political one.  The former can be automated; the latter simply can’t.  The latter calls for engaged citizens with the skills, knowledge, and perspective to wrestle with large questions like these.  It calls for people with skills in critical reading, persuasive writing, and effective public speaking, along with an affective sense that it’s reasonable and proper for them to step up.  Aristotle suggested that manual laborers were too consumed with the stuff of life to ask big political questions, so only the elite should make such questions their business. We’ve built a country on the assumption that Aristotle was wrong.  

The economy has not rendered the classic liberal arts irrelevant.  It has made them more important than ever. Fifty years from now, there may or may not still be community colleges.  But there will absolutely still be a need for an intelligent, engaged citizenry that can control its own destiny. Yes, I’m worried about AI making certain jobs obsolete.  I’m more worried that we’ll let panic over that get in the way of developing the skills collectively to ask if there’s another possibility altogether.