Thursday, January 30, 2014
Libby Nelson, of Politico, asked the other day on Twitter why it is that graduation rates at two-year for-profit colleges are higher than at community colleges, even though graduation rates at four-year for-profit colleges lag their public counterparts.
The standard move would be to explain why graduation rates are a poor measure of community colleges, especially when those rates are based only on the IPEDS cohort (first-time, full-time, degree-seeking students, who are a distinct minority of our student body). And that’s true, as far as it goes. But there’s more to it than that.
Tressie McMillan Cottom and I jumped on the question, because we’ve both worked in both for-profit and non-profit higher ed. We’re both sector-jumpers. And there are things that are readily apparent to sector-jumpers that may not be readily apparent to folks who haven’t been in the bellies of both beasts.
What do many for-profits do differently from community colleges that helps with grad rates?
- Minimal remediation, if any.
I was amazed, when I moved from DeVry to CCM, at the shift in the percentage of students who placed into developmental English. At DeVry, it was in the single digits. At CCM, it was the majority. I learned later that CCM’s percentage was fairly typical for community colleges as a sector.
Since I taught freshman comp at DeVry for a while, I can attest that the placements weren’t because the students were all fully polished upon arrival. They were not. 101 was a punishing course to teach, since you had to try to meet students where they were.
Math was a different issue, but even there, there was a premium on putting students in the highest level class they could conceivably pass.
- Backloading Gen Ed classes. Or, eat dessert first.
Students at for-profits are there to get jobs. Typically, that’s the key selling point that recruiters use. And since many students have had checkered academic pasts, they’re sensitive to revisiting scenes of earlier failures.
Most traditional colleges force students to eat their vegetables -- basic math, English, and the usual distribution requirements -- before getting to what the students recognize as the reason they’re there. As one of the beleaguered gen ed faculty, I heard students ask every single semester why they had to take my class.
DeVry, and apparently other for-profits as well, noticed that. It offered a lot of A.A.S. degrees -- associate of applied science, as opposed to associate of science or associate of arts -- to reduce the amount of gen ed. And the gen ed courses it did require were spread evenly through the program, or even backloaded. Students started with dessert, and only got to the veggies at the end.
It struck me as counterintuitive at first, but over time, I saw the logic. If you recruit with visions of becoming a telecommunications tech -- that was big at the time -- but start by sticking students in a whole bunch of plain vanilla academic classes, they’ll collectively smell a rat. But if you give them hands-on, obviously-relevant stuff from day one and get them hooked, eventually they’ll decide that they’ve put in enough time that they aren’t going to walk away just to avoid a psychology class.
- Highly visible career services.
Again, if you’re selling placement, then you have to stay on-message. Many traditional colleges require a “college success” course. DeVry did that, but it also required a “career development” course that covered resume writing, interview wear, and the like. Students (mostly) liked the latter, even though they griped freely about the former. (The main objectors to the career development course were students over 40. I couldn’t blame them.) Say what you will about giving academic credit for that, but the simple truth was that many of the students did not come in with the cultural capital to know what constituted “professional” dress, or how to handle an interview. Rather than just sloughing that off as the student’s problem, the institution tackled the problem directly. There’s merit in that.
None of these measures is entirely to the good, but they’re potentially useful for community colleges to consider. On my own campus, we’ve found that students in developmental math do better when the class is “linked” to a course in their intended major, which is our variation on “eat dessert first.” When the students are motivated by contact with what they really want, they’re likelier to endure the veggies. We’ve moved career advising to the first semester, to help students identify goals before they choose majors. And we’re looking at ways to help students get through developmental coursework more quickly, so they don’t just throw up their hands in frustration and walk away. We’re doing it to benefit the students, rather than to make money, but we’re doing it.
The for-profits are open to all sorts of criticisms. I left the sector for a reason, and I’m glad to be back among nonprofits. But writing them off without learning from them is a waste.
Thanks to Libby Nelson for a great question, and to Tressie McMillan Cottom for deepening the discussion. She’s really good at that.
I’m hoping to put together a conference panel with other sector jumpers, ideally with Tressie on board too. If you’ve worked deeply in both, I’d love to hear from you at deandad (at) gmail (dot) com.
Wednesday, January 29, 2014
Has anyone out there figured out how to quantify the number of students who would have signed up for a given class if seats were available?
We don’t have a system for waiting lists, which would be the most obvious way. I’m told, by people who have worked in places that had waiting lists, that they’re nightmares to manage. Apparently, when the waitlists are automated, students will game the system by signing up for far more classes than they actually intend to take, and then cobbling together the most amenable schedule they can at the last minute. As a result, the waitlists are full of people who don’t really mean it. And if you put them in automatically and force them to back out again when they’re clogging the system, you create a manual processing nightmare in financial aid.
If you keep manual waiting lists, the issues are even worse. Say a section is capped, and full, at 32. You keep a list of five students who want ‘in.’ A student drops, thereby opening a seat. But before you -- the holder of the list -- notice, another student has gone online and captured the seat. That student has effectively leapfrogged your list. Once word gets out that that’s possible, you might as well not have waiting lists at all.
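For what it's worth, the leapfrog problem is a classic race condition: the manual list and the online registration system both see the open seat, and whoever acts first wins. Closing it requires the system itself to own the seat-release step, promoting from the waitlist before the seat ever becomes visible online. A toy sketch of that idea, with entirely hypothetical class and field names (no real student information system looks like this):

```python
import threading

class Section:
    """Toy model of a course section with an automated waitlist.

    Illustrates the leapfrog problem: if open seats can be claimed
    directly online, a walk-up student can grab a seat before the
    waitlist holder notices. Guarding both paths with one lock and
    draining the waitlist inside drop() closes that gap -- which is
    exactly what a manually kept paper list cannot do.
    """

    def __init__(self, cap):
        self.cap = cap
        self.enrolled = []
        self.waitlist = []          # FIFO list of student ids
        self._lock = threading.Lock()

    def register(self, student):
        with self._lock:
            # A seat is only claimable if nobody is already waiting.
            if len(self.enrolled) < self.cap and not self.waitlist:
                self.enrolled.append(student)
                return "enrolled"
            self.waitlist.append(student)
            return "waitlisted"

    def drop(self, student):
        with self._lock:
            self.enrolled.remove(student)
            # Promote from the waitlist before the open seat is ever
            # visible to anyone registering online.
            if self.waitlist:
                self.enrolled.append(self.waitlist.pop(0))
```

None of this addresses the gaming problem -- students padding the waitlist with classes they don't intend to take -- which is a policy question, not a software one.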
Because we don’t have an elegant way to handle those dilemmas, we don’t have wait lists. Instead, we have classes that fill, classes that run below top capacity, and classes that get canceled for low enrollment. (We don’t run tiny classes on a pro-rata basis; that was an institutional decision made years ago.) When we have to cancel small sections, we try to do it late enough to be sure they wouldn’t have filled, but also early enough to be able to find reasonable alternatives for both the professor and the students. It’s a necessarily frustrating and imperfect process, since it involves guessing about potential demand from late registrants.
After some students get shut out of canceled sections, we start to hear about students who would have taken x if seats had been available. Some students are relatively vocal, others just mutter things to someone in passing, and others take to social media. But it’s hard to tell how many students fall into the “woulda” category for a specific class. Volume of complaint is a terrible indicator, given that some students are louder than others, and some of the loudest complainers have the most constraints on their schedules.
The issue was less urgent when we were bursting at the seams with students. At that point, the major challenge was just adding capacity within severely slashed budgets. Now, though, as the 2009-10 surge recedes and we’re looking at a decade or more of declining numbers of high school grads in the area, it’s becoming more important not to leave enrollments on the table.
Online classes make the question somewhat easier, since it’s far less challenging to add server space than to add classroom space, and there’s no issue of timeslots. But we still need faculty, and still need to be able to afford to pay them. Online courses help, but don’t solve the entire problem.
I’m guessing that the most sophisticated online merchants track more than just purchases; they also track attempted purchases that fell short, people who just looked (what IRL we call “window shopping,” but I don’t know what the online equivalent is -- browser shopping?), and what people defaulted to when a given item wasn’t available. But I don’t know of any ERP systems that do that.
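If a registration system could simply log each blocked attempt, even a crude count would beat volume of complaint as a demand signal. A minimal sketch of the counting step, assuming a hypothetical log of (student, course) pairs recorded whenever a registration fails against a full or canceled section:

```python
from collections import Counter

def estimate_latent_demand(blocked_attempts):
    """Estimate 'woulda' enrollments from a log of blocked registrations.

    blocked_attempts: list of (student_id, course_id) tuples, one per
    failed attempt to register for a full or canceled section.

    Counting distinct students per course -- rather than raw attempts --
    keeps one persistent student retrying five times from inflating the
    number, much the way a loud complainer inflates anecdotal impressions.
    """
    unique_pairs = {(s, c) for s, c in blocked_attempts}
    return Counter(course for _, course in unique_pairs)

demand = estimate_latent_demand([
    ("s1", "ENG101"), ("s1", "ENG101"),   # same student retrying
    ("s2", "ENG101"),
    ("s3", "MAT050"),
])
# retries collapse: ENG101 -> 2 distinct students, MAT050 -> 1
```

The hard part, of course, is getting the ERP to emit that log in the first place; the arithmetic afterward is trivial.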
Wise and worldly readers -- especially those in enrollment management -- has anyone out there found a realistic and helpful way to track the “woulda” enrollments?
Tuesday, January 28, 2014
The latest from the Community College Research Center -- “Community College Economics for Policymakers: The One Big Fact and The One Big Myth” by Clive Belfield and Davis Jenkins -- is a must-read.
Belfield and Jenkins argue that current policy debates around community colleges are misguided because they fail to account for one big fact and they incorrectly believe one big myth. The big fact is that the personal and social returns on investment in community college education are substantial and growing. The big myth is that community college’s financial troubles are the result of inefficiency.
I strongly recommend reading the entire piece yourself. Even the parts I don’t entirely agree with are well-argued, and the core of the argument is both correct and badly needed.
The subtext of the piece, I think, is an accurate sense that many of the issues facing community colleges in the political sphere stem from category mistakes. Anger at lavish dorms at private universities gets directed at community colleges, where lavish dorms simply don’t exist. Anger at a poor job market gets directed at community colleges, even though the returns on investment for students have remained positive, even during a nasty recession. Facile generalizations about “higher education” based on universities get applied to a sector with an entirely different internal logic and business model. Concerns about student loan burdens, largely generated in the for-profit sector, get applied to the lowest-tuition part of American higher education.
I was particularly struck by their examination of what community colleges actually receive and spend. Although tuition and fees have increased over the past ten years, all of the increase -- and slightly more -- can be explained as cost-shifting from the state (or county) to students. Spending by the colleges has actually dropped. And that’s in the face of Baumol’s cost disease, which the authors acknowledge.
The difference has been made up through a host of strategies, most notably the shift to ever-higher percentages of adjunct faculty. If you don’t notice the underlying cost-shift, which most people wouldn’t, then it looks like colleges are charging more and offering less. In fact, both are symptoms of attempting to address relatively low productivity growth in a setting of externally determined austerity.
The paper is more hostile than it needs to be towards certain innovations, but in the context of the larger argument, that’s a footnote. Check the paper out in its entirety, forward it to powerful people, learn its lessons. It’s a tall, cool drink of truth.
Monday, January 27, 2014
At the first CASE conference specifically focused on community colleges, back in 2012, I heard a story that has stuck with me since. A fundraiser was lamenting an exchange he saw his president have with a potential donor. It went like this:
Donor: Is there anything you need help with?
President: (thoughtful pause) No, I think it’s a pretty well-run institution.
Fundraiser: (jaw drops to floor)
The fundraiser’s point was about the need to coach presidents in the art of the “ask.” That’s true, but I see it as a president getting caught up in the two stories that presidents of community and public colleges have to tell, and believe, simultaneously.
First, the “need” story. Public colleges -- and especially community colleges -- have been hit hard by a one step forward-two steps back cycle of appropriations for a long time. Most of them run quite lean, just because they have to. The “need” story is focused on deficits, and on the need to fill in very real and harmful gaps. If legislators don’t understand the “need” story, they’ll divert even more resources to other things. When your business model is built on subsidy -- which is a feature, not a bug -- then you’d better tend to the subsidy.
Second, the “success” story. For all the talk of outcomes assessment, scorecards, and the like, higher education remains a largely reputational industry. That’s true on the recruitment side, and it’s true on the donor side. Most donors prefer to feed and increase success, rather than to fill in gaps.
The impulse to feed success makes sense from a donor’s perspective. You want your gift to make a positive difference. That’s likelier to happen when the institution or program receiving the gift can show convincingly that it knows what it’s doing, and that it has a track record of success. Scholarships lend themselves to that concern, since it’s easy to point to the individual people for whom they made a difference.
Both stories are true, as far as they go. Community colleges generally need more (and more consistent) public funding than they receive, and even in those difficult circumstances, they make tremendous positive differences in the lives of many people. Every year brings new stories of student success, positive community involvement, and fresh budgetary challenges.
The trick for presidents is in knowing how to harmonize the two stories, and knowing which one to emphasize at a given time.
The easiest way to harmonize the stories is to use the “future conditional” tense. “We could be even more successful if…” That works even better when it has some sort of visible, concrete referent, whether it be a music studio, a planetarium, or a scholarship program. It acknowledges need, but places it in a larger context of success. Done well, it’s a wonderful -- and true -- tale with a clear moral. (Notably, this is the polar opposite of the “fall from the golden age” argument often wielded by critics.) At bottom, it’s the classic “call to action.”
The problem with the “call to action” is that it works best when it’s largely unencumbered by history. “Onward!” is a much more thrilling call than “Restore!” When the past is necessarily de-emphasized, it can be easy for the aggrieved to feel like the aggressors have been given a free pass, and sometimes there’s truth in that. The trick is in knowing when you can get a better result by choosing to exercise what Niebuhr called the “spiritual discipline against resentment.” Scoring debating points is fun, but securing resources actually helps.
It’s a difficult and complex balancing act. The president in the opening anecdote didn’t modulate well between “need” and “success,” let alone have a well-developed call to action that enveloped the former into the latter. He just went with “success,” and left a willing donor hanging. I understand the impulse -- it’s easy to get defensive -- but focusing only on one side or the other won’t work.
None of this is to say that people in other roles shouldn’t hit the “need” argument hard. It’s just to say that coming from a president, that argument would backfire catastrophically. The president in that opening anecdote leaned too far in one direction, to ill effect; leaning too far in the other would be even worse. Yes, we’re terrific, should go the response, and we could be even better with your help.
Sunday, January 26, 2014
“Can we make it count towards a degree?”
On the non-credit side, people ask that a lot. It’s a complicated issue.
Within the academic world, we tend to think of colleges as credit-granting institutions. And that’s certainly a significant part of what we do. But particularly in the community college world, non-credit instruction is also popular and well-developed.
Non-credit courses come in several flavors. “Adult Basic Education” (often called “ABE”) covers adult literacy and numeracy courses, as well as preparation for the GED (or whatever successor a given state has chosen). “Personal Enrichment” classes are aimed at adults who just want to learn something for the sake of learning it -- retirees who take pottery classes, say. But “workforce development” is the biggest and most popular subset. Those are geared towards either getting people into the workforce or helping incumbent workers maintain or upgrade their skills. Examples could include training modules on popular software packages, or customer service training.
Credit-bearing courses have relatively strict rules around the amount of work, the amount of time involved, the qualifications of the instructor, and the levels of academic rigor. Non-credit courses are much more flexible, since they stand or fall on their own merits. That doesn’t necessarily mean they’re less rigorous, but it frequently means they’re less comprehensive. If you already have a degree and just need to brush up on your Excel skills, a four-week non-credit class may be just the thing; going through another entire degree program would be overkill.
From an administrative perspective, the credit vs. non-credit divide is relatively clear. But from the outside, it isn’t. And for longer-term non-credit programs, we’re seeing increasing interest in getting academic credit awarded after the fact.
The idea is to make programs “stackable.” Stackability refers to the idea that if you take enough small things in the right order, they can (and should) add up to a big thing. If you learn material in a non-credit setting, you’ve still learned it; being forced to re-learn it in a credit-bearing setting for it to “count” strikes many people as arbitrary, if not self-serving.
Here’s where things get tricky. And here’s also where many of us in the community college world see real appeal in a competency-based approach to awarding credit.
The cleanest way to award credit for non-credit work is through some sort of test or portfolio. In a few fields, colleges have long used CLEP tests to award credit for material learned elsewhere. And it’s not unheard of for particular courses to have locally-developed “challenge exams” by which prospective students can show that they’ve already mastered the material covered in a particular course. The number of students who take these exams isn’t terribly high, but they help satisfy the objection from basic fairness that says that a student shouldn’t have to pay for, and sit through, a course about material she already knows. (Now that most courses have designated “learning outcomes,” there’s a clear basis for an exam.) Portfolios can work similarly, at least in principle; if a student shows through her work that she’s at the level expected of people who’ve completed a class, there’s an argument for awarding credit.
When exams or other challenge methods exist, of course, it’s possible to recast non-credit offerings as a sort of exam prep. (We’ve done that explicitly for years with the GED.) If we went to a competency-based system for awarding credit, then nearly any non-credit instruction would be stackable, as long as there was some sort of similar credit-based content.
Some people on the credit side are wary of the concept of credit for non-credit, and it’s easy to understand why. If you measure student achievements, rather than faculty credentials, then faculty credentials may be devalued. And to the extent that the credit side relies on a clear differentiation between education and training, bridging the two can lead to a certain status anxiety.
Of course, as long as financial aid is available for one and not the other, many of these concerns are mostly moot. A few people may want to prove what they know so they can get a head start on a degree; honestly, I don’t see why they shouldn’t. But until financial aid recognizes the two as connected and treats them accordingly, I suspect the threat to the credit side is more theoretical than real. Until the money moves, most of the students won’t.
Thursday, January 23, 2014
Sometimes, it’s worth reading the whole thing. As they say on the Supreme Court: concur in part, dissent in part.
A consortium of seventeen colleges and universities has submitted a concept paper to the Department of Education, petitioning for “experimental site authority” for their campuses to keep financial aid eligibility while moving to competency-based education. (Hat-tip to Amy Laitinen, from the New America Foundation, for calling attention to it on Twitter.)
Yes, the buzzwords were strong in that one. Essentially, the colleges are asking for special permission to waive certain rules and regulations that normally govern financial aid eligibility. As written, the rules are largely about time: hours per week in class, semesters or quarters with quotas of credits earned in each (“satisfactory academic progress”), and the like. The colleges would rather move to “direct assessment” of student learning. The idea is to measure what they’ve learned, rather than how long it took them to learn it. Doing that requires new definitions of, or approaches to, “satisfactory academic progress,” continuous attendance, and so forth.
The idea behind moving to competency-based measurements makes sense. On an intuitive level, measuring learning is more to the point than measuring time spent trying. If you’re a quick study, it’s hard to justify making you sit and wait for others to catch up. And if we’re ever going to get a serious handle on budgets, we’re going to have to get Baumol’s Cost Disease under control. By definition, that cannot happen as long as we measure our product in units of time. Competency-based measures allow the possibility of finally achieving actual productivity gains, using the term ‘productivity’ in the Econ 101 sense.
All of that said, though, the concept paper is well worth a close read for the odd little asides and compromises it includes. It’s claiming both more and less than it should. Some highlights:
- The report correctly notes that while colleges now are theoretically allowed to use “direct assessment,” many of the regulations in Title IV are written in ways that prevent it for all practical purposes. Moreover, colleges are required to be either entirely time-based or entirely competency-based; they can’t mix and match. In practical terms, that means that a college that wanted to try a competency-based approach would have to redo every single back-office function before starting. (This is probably why SNHU had to establish an entirely separate division, College for America, before making the leap.) That’s a hell of a commitment for an institution with students currently enrolled. The report asks for the rules to be rewritten to allow colleges to try a competency-based approach in individual programs, rather than across the board. This strikes me as an excellent idea. Get some proof of concept before trying to renegotiate faculty union contracts, for example.
- Since it’s difficult to measure “satisfactory academic progress” in a given semester if semesters don’t exist, the report offers alternatives. The first, which seems reasonable to me, is to look at time in the context of overall degree completion, and to pro-rate the number of competencies (with some wiggle room) so that a student needs to be on pace to finish within 150 percent of normative degree time. The second is radical, and I can’t decide if it’s brilliant or preposterous. It’s to pay out financial aid only after competencies have been demonstrated. The idea there is that if aid is disbursed only when students are actually learning something, then the amount of time it takes them to learn it becomes irrelevant.
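The first alternative reduces to a simple pacing check: no semester-by-semester quota, just a running comparison against the slowest acceptable trajectory. A sketch of the arithmetic, using the 150-percent multiplier from the report and otherwise made-up numbers:

```python
def on_pace(competencies_done, terms_elapsed, total_competencies,
            normative_terms, max_time_ratio=1.5):
    """Check whether a student is on pace to finish within
    150 percent of normative degree time.

    The slowest acceptable pace is
        total_competencies / (normative_terms * max_time_ratio)
    competencies per term-equivalent; a student is satisfactory
    whenever cumulative progress meets that line, regardless of
    how the work was distributed across terms.
    """
    required_per_term = total_competencies / (normative_terms * max_time_ratio)
    return competencies_done >= required_per_term * terms_elapsed
```

For a hypothetical 60-competency program with a four-term normative time, the line works out to 10 competencies per term-equivalent: a student three terms in needs at least 30 banked, whether they were earned evenly or in one burst.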
Yes and no. Yes, it solves the SAP problem with admirable elegance. But no, it doesn’t solve the cost problem. If aid isn’t disbursed until after the fact, how are the upfront costs covered? Colleges incur costs from day one, and asking them to “eat the cost” of that first semester-equivalent isn’t terribly realistic. Many colleges are running close to the bone as it is.
The “learn now, pay later” model is also analytically distinct from a competency-based approach. In theory, we could apply “learn now, pay later” to traditional semesters. But we don’t, and there’s an obvious reason for that. That obvious reason would still hold under the alternative approach.
The paper further suggests allowing Title IV student aid only for “direct costs,” such as tuition and books, in the name of reducing student borrowing. (Current rules allow for some estimation of living expenses, as well.) The merits of that suggestion are arguable -- I’d argue that moving living expenses off-books doesn’t make them go away, and would instead just drive students to private lenders -- but more importantly, it’s separable from the competency-based vision. The only reason I can imagine that it’s included is to run up the score.
Of course, “direct costs” are harder to figure when each student moves at a different pace. If a college charges by the competency -- similar to charging by the credit, now -- then there’s a question to be raised about students who show up already testing out of some. Do you charge them for testing out? (To be fair, CLEP exam fees aren’t covered by financial aid now.) I prefer the subscription model that CfA and some others use, in which you pay x dollars for y amount of time, and use an “all you can eat” approach during that time. That gets around the “where did you learn that?” problem cleanly, although it reintroduces time-based measures through the back door.
The weirdest moment in the report came on page 20. I can’t really do it justice in paraphrase, so here it is in all its glory:
Federal aid policies discourage students in competency-based degree programs from taking courses offered in the traditional credit-hour format. Some students in competency-based programs might do better in traditional credit-based courses in certain subject areas. For example, students who struggle with mathematics might thrive in credit-based courses in which there is significantly more direct contact with instructors. Allowing students to choose the instructional or learning modality that best suits their learning styles could reduce the amount of repeated course-taking and also shorten time to degree and save money. (emphasis added)
Wow. That one contains multitudes.
From a community college perspective, my first thought is that students who struggle with math aren’t the exception. They’re the majority. If a competency-based approach disadvantages those students, then it’s inappropriate for us.
Why would it disadvantage them? Apparently, a competency-based approach features “significantly” less direct student contact with instructors.
I’m not sure why that has to be true. But if it does, it’s pretty damning.
The assertion flies in the face of what I’ve seen on the ground. This semester we’re rolling out our new self-paced developmental math course, which features both faculty and on-site tutors to help students as they confront new (or persistent) hurdles. We’re hoping to recoup the extra cost through improved student completion over time. It may or may not work, but it’s worth trying. (We’re working with developmental because the “transfer” issue is off the table at that level.) It’s not a pure competency-based approach, since the course is still scheduled into semesters, but a student could conceivably get through two or three semesters’ worth of material in one.
The report mentions in passing some dramatic changes to the faculty role. On page 11, it notes matter-of-factly that “[s]ome institutions separate subject matter-expert faculty who design programs and assessments from student-mentor faculty, who serve as the primary contacts with students. In addition, some programs have additional student supports and faculty who solely handle grading and assessments.”
The faculty role gets unbundled, with a production model that winds up looking very much like my graduate program did back in the ’90s -- and that model didn’t require a competency-based approach. The t.a.’s have a different name, but are otherwise recognizable; most faculty would regress (or be regressed) to the t.a. duties they performed in grad school.
I understand the usefulness of presenting a concept in its purest form; you can always take less than you’re granted, but you can’t take more. Might as well ask for it all upfront, and make the necessary compromises later. I get that. And if that’s what this is -- an opening salvo -- then many of my concerns are moot.
But it also raises hackles that don’t necessarily need to be raised. A competency-based approach could conceivably work in a number of different ways.
The paper is also quiet on some of the concerns that led to a new focus on seat time over the last ten years or so. In the 2000’s, a number of for-profits took some pretty implausible liberties with seat time and credits in order to maximize student loan income while minimizing labor costs. Requiring a minimum number of hours of seat time may be asinine in certain ways, but it’s also, at least in part, a response to some very real abuses. It’s not clear from this paper what would prevent a recrudescence of those abuses.
I offer these thoughts as a sympathetic critic, as one who wants the idea to work. It will work best if it’s less theoretically stringent and more cognizant of facts on the ground. I hope the experimental site authority gets approved, precisely so the idea could be refined through contact with those facts.
Wednesday, January 22, 2014
Hope lives in the cracks.
This week I’ve been awash in data, from various sources. On campus, we had our first real “Data Day,” in which we made actual posters of all manner of IR data and shared it with the entire faculty and staff. The idea was to provide a common factual base for on-campus discussions of policy, innovations, and planning. I don’t know if everybody “got” the subtext, but I did see some folks lingering at particular posters for extended periods, pointing at individual numbers and talking to each other. To the extent that we can replace hunch or anecdote with fact, I have to believe we’ll be better off, even if some of the facts weren’t terribly encouraging in themselves.
Earlier this week, the Chronicle published one of the more disturbing data-driven pieces I’ve seen in a while: projections of the number of 18-year-olds in various areas over the next fourteen years. You can enter the name of a county, and it will show you the number of students projected to reach traditional college-going age each year. The larger narrative was about the decline of “prime suspects” for enrollment at many traditional colleges in most parts of the country, including my neck of the woods.
The Chronicle piece is easy to nitpick in its particulars, but hard to ignore in its sweep. Yes, people move between the ages of, say, four and eighteen, so you can’t just take the number of four-year-olds now and assume it’ll be the number of eighteen-year-olds in fourteen years. Yes, community colleges in particular draw on a much broader pool of potential students than just eighteen-year-olds. And to the extent that we’re able to make progress on reducing racial disparities in college enrollment and graduation, the impact of some of the projected changes on colleges may be mitigated. That’s a case in which we can do well -- better than we otherwise would, anyway -- by doing good. It’s lovely when incentives align like that.
All of that said, though, the spectre of colleges competing against each other for a steadily declining pool of students is vivid, plausible, and ugly. It does not bode well for the folks who work in the sector. Higher education is structured to grow. Even stasis presents a strain. Sustained, long-term shrinkage will not be pretty.
(Admittedly, I’m reflecting my region. Some parts of the country are growing much more quickly, and are still lightly served by colleges. I’m focused here on the Northeast, especially outside of the tentpole cities.)
Given dreary data, it’s tempting and understandable to want to look away. Transparency can reveal uncomfortable things. The prospect of a clearer look at just how difficult things will be can be kind of depressing.
Here’s where I have to fall back on a couple of core convictions.
The first is that it’s better to know than not to know. If we have a sense of the demographic waves to come, we can strategize. If we don’t, we can’t do much more than guess. I’d rather have a hand in shaping the future than just give up and get buffeted by the fates. That’s true whether we’re talking about academic planning, enrollment, finances, or whatever else.
And the second, which stands in some tension with the first, is that even when we know, we know less than we think we do. Hope lives in the cracks.
I don’t know what the next hot industries will be, or where the next breakthrough will occur. I’ve lived through enough hot-cold-hot-cold cycles with various industries at this point to have a healthy skepticism of long-term projections. Some changes are linear, and some are more like dams breaking -- nothing but creaking for a long time, and then all at once. And sometimes the changes that initially look like threats turn out to be opportunities.
So yes, it’s helpful to see the data. It’s even more helpful when the data are taken as a guide to action, rather than as gospel. Even when we have to swallow hard first.
Tuesday, January 21, 2014
“[Gov.] Beshear proposes budget cuts and building projects for Kentucky universities” -- Headline in Lexington Herald-Leader (hat-tip to Lee Skallerup Bessette for highlighting it)
Readers of a certain age will remember the neutron bomb. It was billed as a breakthrough in warfare back in the ’70s. Its selling point was that it would kill people by the thousands, but leave buildings standing. That way, the eventual winner of the war would be able to take possession of the assets of the vanquished, instead of the smoldering (or glowing) piles of rubble left by more traditional weapons. I don’t know whatever became of it, but I remember it making quite an impression.
People in higher education have become familiar with neutron budgets, of which Kentucky’s is the latest. It’s a budget in which colleges get money for buildings while, at the same time, taking cuts in the money to pay people to work in those buildings. From the perspective of folks on the ground, neutron budgets are both devastating and demoralizing. But from the perspective of a state legislature, neutron budgets can actually make a kind of sense.
(Shameless plug alert: I addressed this in chapter two of my book.)
The politics and accounting around buildings are very different from the politics and accounting around people.
The accounting is the easier one. Buildings are considered “assets.” People are considered “labor costs.” Put differently, buildings are credits and people are debits.
That’s not as silly as it sounds. Buildings can be rented out or sold. On my own campus, and we’re not unusual in this, one small revenue stream comes from renting out space for meetings and conferences. In that instance, counting rentable space as an asset makes sense.
Buildings also provide fundraising opportunities, particularly in naming. The major cost of a building is upfront; once it’s built, the maintenance cost is far lower than the construction cost, especially in the first decade or so. Buildings provide their own collateral, making it easier to borrow money to pay for them. (That isn’t necessarily true in Massachusetts, for complicated reasons, but it holds in most places.) And in the case of new construction, contracts for construction are major political plums. A cynical sort might suggest that that’s why it’s easier to get money for new construction than for deferred maintenance.
People, on the other hand, are recurring costs that just get more expensive over time. And the revenue sources that pay for people are fewer, and more contested, than the revenue sources that pay for buildings. If you float bonds to pay for a building, you’re investing. If you float bonds to make payroll, you’re sending a signal that you’re insolvent or are about to be. People are not collateral.
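The asymmetry above can be seen in a back-of-the-envelope comparison. This is a hypothetical sketch with invented figures -- the point is the shape of the two cost curves, not the dollar amounts:

```python
# Toy comparison (all numbers invented): a one-time building outlay with
# modest upkeep vs. a recurring payroll line that grows every year.

def building_cost(construction, annual_maintenance, years):
    """Big upfront outlay, then comparatively small annual upkeep."""
    return construction + annual_maintenance * years

def payroll_cost(starting_salary_line, annual_growth, years):
    """Recurring cost that compounds year over year."""
    return sum(starting_salary_line * (1 + annual_growth) ** y
               for y in range(years))

# Over a decade, the recurring line quietly overtakes the headline-grabbing
# construction number, even with a fraction of the starting cost.
print(building_cost(20_000_000, 400_000, 10))
print(round(payroll_cost(3_000_000, 0.03, 10)))
```

That compounding is why a legislature can fund a building as a one-time “investment” while treating the people inside it as a permanent, growing liability.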
Research universities and very high-end liberal arts colleges often have endowed chairs, in which philanthropy is used to pay salaries. Most community colleges and less selective four-year public colleges don’t. Here, salaries have to come from operating funds (and sometimes from grants, or what we call “soft money”). Grant-funded positions have expiration dates, which means that it’s impractical to use grant funding to pay for tenure-line faculty. Those positions have to come from college operating budgets, along with everything else. And those are the budgets that tend to get hit by annual cuts.
On the ground, none of that matters. It can be maddening to have a gleaming new facility and nobody to teach there. The in-your-face juxtaposition of an increasingly adjunct faculty with a shiny new building is hard to ignore. But neutron budgets have a logic of their own. Making the choice to overcome that logic and do what needs to be done on the ground requires conscious acts of political leadership. Defaulting to the logic of existing systems results in proposals like the one in Kentucky.
The neutron bomb may have gone the way of the leisure suit, but the neutron budget is alive and well. It isn’t inevitable, but it’s the path of least resistance. My condolences to Kentucky.