Sunday, September 30, 2018

“Fix Systems, Not People”


I haven’t been able to attend this year’s #RealCollege conference, but I’ve been following it on Twitter, and I was struck by a line delivered there by DeRionne Pollard, the President of Montgomery College.  She implored reformers to “fix systems, not people.”

She’s right, and it sounds easy.  It isn’t.

You’d think that a focus on systems, rather than people, would be an easy sell.  A focus on systems suggests that many of the issues a college faces can be solved by the people already there.  It allows for the acknowledgement that most of those people are hardworking, well-meaning, and professional. In a sense, it lets incumbents off the hook.  You’d think that would be popular.

Often, though, those discussions fall flat.  They fall flat for a few interconnected reasons.

The most basic, and frustrating, is the inability to see those systems in the first place.  “Why can’t students get to class on time?” Well, why assume that all students have reliable cars?  “I can’t help it if students have complicated lives.” Partly true, but we can make them a little less complicated by replacing expensive textbooks with OER, following the principles of Universal Design for Learning, and recognizing the academic calendar as a human construct that can be reworked in other ways.  Many of our basic operations are predicated on the assumptions that students are well-prepared, live at home with families that support them economically, have reliable cars, don’t work many hours for pay, know what they want, and can devote themselves full-time to their studies if they’d just buckle down.

With the students who fit that profile, we have spectacular success rates.  But those aren’t most of our students. That doesn’t make our students defective.  It means we need to be willing to rethink some of our basic assumptions.

Relatedly, many people use concepts that fit individuals to explain organizations.  But the two are fundamentally different. I’m old enough to refer to this as the sociological imagination, but it  goes by other names, too. I think of it as the difference between psychology and sociology.

The unit of analysis in psychology, generally, is the individual person.  The unit of analysis in sociology is a group of people. (Yes, social psychology sits in between, but the basic point holds.)  Groups aren’t simply the sums of their parts; they take on dynamics of their own. Add organizational and political dynamics to that, and analyses that start with individuals can go badly wrong, even with good intentions.

Take professional development, for instance.  When the funding for it exists, it’s often understood to mean conference travel for individual faculty within their own disciplines.  That’s obviously important, given that faculty need to be current in what they’re teaching. But it’s also only one piece of the puzzle.  I’ve had conversations with engaged, intelligent people who honestly don’t see why professional development would be any more than that.

I’m trying to push the idea of sending groups -- four or five at a time -- to conferences that address community colleges as institutions, like the League for Innovation, Moving the Needle, and the AACC.  It’s a harder sell than I would have expected.  Some of the ideas floating around those places require getting other people to change what they’re doing.  Depending on local context and the way those ideas are framed, they can come off as insulting, even though they’re actually suggesting that the same people can get better results.  Properly understood, they’re empowering.  But that involves a willingness to take a leap.

Dr. Pollard is right, but the challenge is much greater than a quick line suggests.  I hope that the folks at #RealCollege do more than just appreciate a good line when they hear one.  Putting it into practice, as difficult as that is, is probably our best hope.


Thursday, September 27, 2018

Friday Fragments


In light of the Kavanaugh/Ford hearing, it seems like a good time to revisit this piece from 2012.  The short version is that gender studies is actually one of the most useful and practical things you can take.  The Senate, and the country, would be infinitely better off if senators were more aware of some basics.

--

IHE had a good piece yesterday on student success courses and initiatives at two-year colleges, but it left out two key factors limiting the spread of success courses: transferability and credit limits.

Many four-year colleges that teach their own freshman seminars don’t take community college freshman seminars in transfer.  That creates a moral dilemma for community colleges: we want students to succeed, but we also don’t want to make them pay for credits that won’t go with them.

Recent moves to put hard caps on the number of credits in a degree make matters worse.  New Jersey just passed a 60 credit cap for associate degrees, which will require cutting classes out of existing programs.  That’s bad enough in itself, but it makes the introduction of -- or beefing up of -- success courses that much harder. When credits become a zero-sum game, the academic politics get much nastier.

In principle, this should be easy to fix, but it can’t be done from here.  

Here’s an empirical question for folks with access to more data than I do.  Which is more likely to increase retention and graduation: a 60 credit program without a success course, or a 63 credit program that includes a 3 credit success course?  If the data suggest that it’s the latter, then some pretty obvious reforms suggest themselves. Does anyone know?

--

Yet another “does anybody know?” question.  Like most places, we have software packages for training employees on FERPA compliance, Title IX compliance, and the like.  But we recently went looking for something similar on working with students with disabilities, and found almost nothing.

Is there anything good out there?  Roughly ten percent of our students have documented disabilities, and there may be more who don’t have documentation.  We’re committed to giving everybody a fair shot, and treating everybody with respect.  And it’s a legal issue, as well as a moral one; the ADA is very real.

Surely there’s something good out there, yes?

--

This week I turned fifty.

Fifty kind of sneaks up on you.  There’s an old line that forty is the old age of youth, and fifty is the youth of old age.  I can see it. For years, I was accustomed to being the youngest person in the room at meetings.  That almost never happens anymore. Sometimes I’m even the oldest. That was a shock the first few times; now, not so much.  

The kids, of course, are convinced that I can be carbon-dated to the early Paleolithic.  That’s as it should be.  They’re in touch with elements of popular culture that I can’t even describe, let alone form a coherent thought about.  A few months ago, tired of playing the same old stuff on Spotify all the time, I asked The Boy to recommend something.  He recommended Kendrick Lamar.  I got about two minutes in before I had to stop.  It felt like eavesdropping on somebody else’s conversation.  Somehow, I don’t think “middle-aged suburban white guy” is Lamar’s target demographic, and that’s okay.  I trust TB’s judgment that Lamar’s work stands out in its genre; I just don’t get the genre.  But I’m also old enough that it doesn’t bother me.

I had hoped that age would bring wisdom, preferably in the form of oracular aphorisms to be wielded as necessary.  Sadly, no. The closest I’ve come to anything like wisdom is not getting caught up in as much stupid crap. Instead of pronouncements about the nature of existence, the best I can claim is a more finely-tuned BS meter.  That also entails humility, as we all have our own BS, and it gets harder to deny as experience accumulates. Combine a finer-tuned BS meter with gradually increased self-awareness, and it’s hard to get quite as indignant about things.  There’s more of a sense of limits.

The opposite is true at work.  There, a sense of the finitude of time brings more urgency to the work, and a finer-tuned meter makes it easier to sort excuses from reasons.  As much, or more, of my career is behind me as is ahead of me. The marks I want to make need to be made sooner, rather than later. Whatever happens with titles or ranks, I want the places I’ve worked to be better -- fairer for students -- for my having worked there.  In a setting marked by long-term financial declines, that’s a taller order than it may seem. At least I have a much better sense now of what those marks should be. The soundtrack may be getting a bit stale, but the mission is clearer than it has ever been.




Wednesday, September 26, 2018

A Fearless Prediction


There will be another recession.  

That matters for all of the human reasons that recessions matter -- people losing jobs, losing homes, living under a gnawing fear that ages them quickly.  But it also matters for higher ed policy.

Wednesday’s piece about the different permutations of “free community college” in various states, including my own, noted that several of the proposals were able to gain political traction by making the criteria so narrow that very few students actually qualified.  That keeps the cost down. During relatively flush times, when tax revenues are up and community college enrollments are down, it’s easier than usual to push for some version of free community college. Versions that don’t cost much are easy to fold into large budgets when revenues are strong.

But a recession will come.  I don’t know exactly when, but it will.  They always do. And if history is any guide, the next recession will reduce tax revenues to states, while simultaneously increasing enrollments at community colleges.  In other words, it will put severe economic and political pressure on free community college programs. They will become much more expensive at exactly the moment when it will be harder to cover the cost.  I wouldn’t be surprised to see many of them either shrink or fade away entirely if they remain in their current form.

The time to fix the roof is when the sun is shining.  The time to recession-proof a social program is when the economy is strong and tax revenues are healthy.  Advocates of free community college should be taking pains now to ensure that the programs aren’t eviscerated the next time the economy has to catch its breath.  Because it will.

One way to do that is to move much of the funding stream from the states and counties to the Feds.  That would help because the Feds can deficit-spend; most states and counties can’t. That means that the Feds can, if they choose, deliver a Keynesian counter-cyclical spending boost when things go bad.  Education is a great vehicle for that, in part because it puts people in better positions to thrive during the subsequent recovery.

The catch, obviously, is that the Federal government isn’t necessarily any wiser than the states.  And political winds shift, so a sympathetic administration can easily be followed by a hostile one.           

The Feds already supply some counter-cyclical boost through Pell grants and subsidized loans.  But they tend not to do operating aid, so the only way for colleges to capture more of the Pell and loan money is to raise tuition.  During recessions, that’s politically toxic, and it sends exactly the wrong message.

American political culture is deeply skeptical of anything it perceives as a handout, but it’s much more accepting of what it perceives as an earned benefit.  If free community college (or whatever variation on it) can be structured to come across as an earned benefit, it’s much likelier to survive the next recession, regardless of who is in office.  For proof, just compare the political vulnerability of “welfare” to the invincibility of Social Security. They’re both transfer payments, but the latter is perceived, rightly or wrongly, as earned.  That makes it much harder to take away.

Community service requirements are one possible way to do that, as Tennessee has done.  

If that’s not enough, or if the oversight bureaucracy is too much, there’s also the option of doing what Marion Tech did in Ohio, and making the second year free, contingent on a good GPA in the first year.  In that model, “skin in the game” isn’t in the form of debt or money; it’s in the form of demonstrated academic performance. Students have to earn the benefit. Try taking it away from students who have earned it fair and square, and prepare for serious blowback.

I’ll admit some bias on that one, but I like it a lot.  It sends the right message to students about persistence, it rewards desired behavior, and it allocates scarce resources towards increased retention and graduation.  It even leaves room in the first year for private philanthropy, or dual enrollment credits.

Whatever the method, though, if we rely on the kindness of enlightened legislators during economic expansions when enrollments are low, we’re in for a rude surprise when the next recession hits.  This is the time to fix the roof.

Tuesday, September 25, 2018

Designing for Pushback


This one is radioactive.  Without getting too specific, for obvious reasons, I’ll try to show why.

Apparently the University of Wisconsin system is developing a protocol by which colleges that fired employees for misconduct, or that were preparing to do so when the employees resigned, can reveal that when called by prospective employers to check references.  The IHE article outlines some of the mechanics of the process; the short version is that referees with a sense that something was amiss are instructed to refer callers to HR central.  HR central will do the reveal.

The article correctly notes that there’s a widespread fear of litigation around disclosure of charges, particularly if they were never settled.  That often leads to new employers unknowingly hiring folks who’ve engaged in misconduct before. The process is meant to offer a way for, say, Hypothetical State U to inform North Wherever State U that Professor So-and-so abused his authority with students.  NWSU, in theory, can weigh that information in the hiring process.

It makes a world of sense until you start anticipating counter-responses.  

The issue the policy is attempting to solve is real.  Hiring someone who turns out to be a nightmare places vulnerable people, and institutions, in danger.  In the current academic job market, it also denies scarce opportunities to good people.

Referees and callers have even developed a code to say things without saying them.  It goes like this:

Caller: How would you describe So-and-so as an employee?

Called: I can only confirm dates of employment.

That’s usually understood to mean “I wouldn’t hire him if my life depended on it.”  But some litigation-shy places have defaulted to “confirm dates of employment” for everyone, good or bad, thereby corrupting the code.  

In practice, though, I could see disclosures getting complicated.

What happens when the investigation was never completed?  The policy states that investigations should be completed even after an employee resigns.  But what if the call comes before it’s completed? Say that someone is accused in February and resigns in March.  The employer gets a reference call in April, but the investigation won’t be complete until June. The attorney for the employee would argue that disclosure to the caller in April would be prejudicial, and would do material harm to the former employee.  And I could easily imagine a due process claim brought by a former employee saying that his inability to defend himself after leaving resulted in a kangaroo court.

In the age of #MeToo, it’s easy to imagine that this issue is about rape or sexual assault.  But those are crimes; there’s an entire criminal justice system whose job it is to prosecute those.  (It’s imperfect in many ways, heaven knows, but it exists.) The issues likelier to fall into this category aren’t crimes per se, but are violations of employee codes of ethics or conduct.  This might be the employee who keeps calling women derogatory names, and/or habitually uses racial slurs. Those are fireable offenses, but they aren’t crimes. Asking applicants whether they’ve been convicted of felonies won’t catch those.

To the extent that employers are permitted to share more information, I would expect to see much more preemptive pushback, both from the accused and from their unions.  If employers are permitted to disclose that investigations are under way, I envision due-process counterarguments.  If they are permitted to disclose that “credible evidence” exists -- the standard in California’s proposed bill -- I see pushback on the determination of credibility prior to the conclusion of the investigation.  Given tight academic job markets, it strains credulity to argue that it’s on the prospective employer to weigh the facts.  The prospective employer presumably wouldn’t even have access to the facts.  Likely, it would just default to ‘no.’  At that point, the accused has been punished without even a finding of actual guilt.

Colleges fearing the cost of extended litigation -- even if they’re pretty confident that they’d win, eventually -- may make the calculation that it’s easier just not to push.  At that point, we’re right back where we started.

The Wisconsin law does put in some protections for those who disclose; that’s definitely helpful.  And the problem it’s trying to solve is real, and urgent. Honestly, I don’t have a better alternative at hand; I’ve actually been in situations where I was frustrated by the gap between what I knew about someone and what I could prove.  But I’m pretty confident that the first few folks accused through this new system will push back, hard, and along predictable lines. I’d love to see Wisconsin find workarounds for those so we can protect our employees and students without walking into a legal buzz saw.  If it can be designed for pushback, it might just work.

Monday, September 24, 2018

Successful Reverse Transfer


This one is a question for practitioners, so it may be a bit wonky.  Consider yourself warned.

My state recently passed a law mandating that public two-year and four-year institutions have “reverse transfer” agreements, through which students who transfer “upward” prior to graduation can have some credits transferred back to the two-year school to finish the associate’s degree.  

The idea is twofold.  First, it gives the student a fallback option if life happens during the junior or senior year and she has to step away.  It’s better to leave as an associate degree grad than as a dropout. Second, it gives two-year colleges the credit they’re due.  As long as we must be measured by graduation rates, let’s at least let the graduation rates reflect students who got what they wanted and went on.

But I’ve never heard of students pursuing the reverse transfer option in any significant numbers.  At a recent visit to a four-year public college to discuss reverse transfer, we got statistics on how our students have fared there.  They’ve outperformed “native” students. One of us asked how many students have even tried to “reverse transfer” credits. The grand total so far is zero.

That’s one pair of schools, and the rule is new, so I don’t want to jump to conclusions.  But I have to admit, I’ve never heard of reverse transfer of credits (as opposed to the reverse transfer of students, in which a student who starts at a four-year leaves and starts over at a cc) happening on any significant scale.  A student here, a student there, but that’s it, and even those are rare.

That could be a function of several factors.  Students may not know about it. Or, the implementation may be more trouble than it’s worth.  Or, they simply might not see the point.

If it’s the first, we can fix that (to some extent) with an awareness campaign.  If it’s the second, we can look at best practices and try to streamline our own processes, maybe even running some sample students through it to see how it goes.  But if it’s the third, I’m not really sure what to do with that.

So this is where I turn to my wise and worldly readers.

Has anyone ever seen a reverse transfer of credits system attract a significant number of students?  If so, what made it work? Is there anything that falls under “if we knew then what we know now?”

Sunday, September 23, 2018

The Best Facts


The best facts, as a writer, are the ones that contradict a widely-held bit of dogma.  They call into question things that you didn’t know were questionable. (I’ve heard the same said of science.  The breakthrough moments aren’t marked by someone proclaiming “Eureka!” They’re when someone notices a result, raises an eyebrow, and mutters “That’s weird.”)  A recent Washington Post column offered a great fact.

It was about the relative predictive value of standardized test scores, as opposed to grades.  

That narrative usually goes in one of a few ways.  There’s the “standardized tests are evil” line, which is well-worn in academic circles.  It typically points to the low predictive value of test scores for anything other than subsequent test scores.  It also often points to cultural bias in tests, and/or to racial or economic gaps in scores.  Counter to that is the “level playing field” line, which holds that for all of the flaws of standardized tests, they at least allow students from schools that aren’t as well known to show what they can do.  More recently, the “multi-factor placement” line has become popular.  It holds that some combination of the two is more effective than either by itself.  It comes in “inclusive” and “exclusive” flavors.  The “inclusive” flavor says that a student who passes any one of several factors should be allowed to bypass remediation; the idea is to encourage more students to take college-level courses upfront, the better to improve graduation rates.  The “exclusive” flavor is the one used by exclusive colleges, which look ten different ways for flaws.
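
To make the two flavors concrete, here’s a minimal sketch of the decision rules in Python.  The factor names and cutoffs are invented for illustration; no college’s actual placement policy is implied.

```python
# Hypothetical sketch of "inclusive" vs. "exclusive" multi-factor placement.
# The factor names and cutoffs are invented for illustration; real colleges
# use their own instruments and thresholds.

def places_inclusive(factors, cutoffs):
    """Inclusive rule: clearing ANY single factor lets the student
    bypass remediation and start in the college-level course."""
    return any(factors[name] >= cutoffs[name] for name in cutoffs)

def places_exclusive(factors, cutoffs):
    """Exclusive rule: every factor has to clear its bar, so one weak
    number is enough to screen the applicant out."""
    return all(factors[name] >= cutoffs[name] for name in cutoffs)

# Invented example: strong high school GPA, middling placement-test score.
student = {"hs_gpa": 3.4, "placement_test": 245}
cutoffs = {"hs_gpa": 3.0, "placement_test": 260}

print(places_inclusive(student, cutoffs))  # True  -> college-level course
print(places_exclusive(student, cutoffs))  # False -> screened out
```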

The article, which draws on a study by Seth Gershenson, suggests that the “inclusive” version of multi-factor placement may fall victim to an insidious flaw: grade inflation in affluent schools.

When I’ve talked about using high school GPA for course placement, as opposed to Accuplacer scores, one of the usual objections I hear is that a “B+” from some districts is not the same as a “B+” from others.  The assumption is typically that the lower-income schools have easier grading curves, so a student towards the top of a lower-income school may not be at the same level as one towards the top of a higher-income school.

This article and study suggest that it’s the other way around.  Grade inflation is more pronounced at more affluent schools.

That’s actually consistent with studies of grade inflation in college.  Grade inflation may be rampant at Ivies and similar places, but it’s virtually unknown among community colleges.  The reasons may be similar. In both cases, socially-driven senses of entitlement, sometimes combined with real or threatened parental action, can nudge grades upward.  Where those are less common, so is grade inflation.

This may be where the “standardized test as level playing field” argument gains some traction.  Standardized tests can provide a reality check on the seemingly-stratospheric GPA’s of students from more affluent places.  If one were cynically inclined, one might also wonder if this is part of why some elite prep schools are dropping AP classes.  A reality check can’t help elites; it can only hurt them.  Chad’s parents may be able to nudge the B to a B+, but they can’t nudge a 4 to a 5.  What would Legacy Prep gain by taking the risk?

The inclusive version of multi-factor placement assumes that past grades are a reliable guide to future grades.  It makes sense only as long as the two institutions giving grades are playing by the same set of rules. But it seems that some of the more affluent ones may be bending the rules, whether consciously or not.  A reality check may be in order.

I like this fact a lot, because it flips an existing narrative on its head.  The best facts do.

Thursday, September 20, 2018

9.6 Billion


Harvard, undergraduate population of approximately 6,800, just completed a five-year capital campaign that generated $9.6 billion.  

Brookdale, undergraduate population of approximately 11,800, has an annual operating budget of about $84 million.

Readers, it’s time for some math.  For the sake of argument, let’s pretend not to notice that capital and operating budgets are not the same thing.  I’ll use relatively round numbers.

One percent of 9.6 billion is 96 million.  Ten-year Treasuries are yielding about 3 percent right now.  (Score one for Marketplace!)  So if that money were invested in ten-year Treasuries -- a level of risk aversion that would make any self-respecting fund manager shudder -- it would return about $288 million per year.  I’d say that’s before taxes, but, of course, Harvard is tax exempt.  The $288 million is real.  (They’d almost definitely go higher-risk, higher-reward, but I’m trying to keep it simple.)

Brookdale could increase its operating budget by more than half, and still consume only about half of the annual interest thrown off by the principal.  And it could go entirely tuition-free and fee-free. It could hire full-time faculty and staff, pay for professional development, support shuttles to the various campuses, and do it all without charging students.  Hell, it could even stop taking subsidies from the state and county. Enrollment would likely increase, but that would be fine; we’d have the employees to handle it.

Again, this is just from the interest.  There would be enough interest left over to cover one or two of our sister colleges, too.  Indefinitely. Assuming enrollment increases here and at a sister college, we’re looking at making college free for about 25,000 students per year, plus improving the work lives of hundreds of employees.  The community payoff would be dramatic, lasting, and compounding.
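
For anyone who wants to check the arithmetic, here’s a minimal back-of-the-envelope sketch using the same round numbers.  The 3 percent yield and the “grow the budget by half” scenario are just the assumptions from the paragraphs above, nothing more.

```python
# Back-of-the-envelope version of the arithmetic above.  All figures are
# the round numbers from the post; the 3 percent yield is an approximate
# ten-year Treasury rate, not a forecast.

campaign_total = 9_600_000_000      # Harvard's five-year campaign haul
treasury_yield = 0.03               # ~3% on ten-year Treasuries
brookdale_budget = 84_000_000       # Brookdale's annual operating budget

annual_interest = campaign_total * treasury_yield
print(f"Annual interest: ${annual_interest:,.0f}")   # $288,000,000

# Grow Brookdale's budget by half and see what share of the interest it uses.
expanded_budget = brookdale_budget * 1.5
print(f"Expanded budget: ${expanded_budget:,.0f}")   # $126,000,000
print(f"Share of interest used: {expanded_budget / annual_interest:.0%}")  # 44%
```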

American political culture holds that 9.6 billion for Harvard is “philanthropy,” but free community college is “socialism.”  There’s something fundamentally wrong with that.

Instead, we hope against hope to get the first real increase in operating funding since the 1990’s, while Harvard has to figure out what to do with the latest billions.

As a former political scientist, I’ve been fascinated to see the concept of socialism catching on among younger voters.  I’m old enough to remember when the word was an epithet. Very Smart People have pronounced themselves perplexed at its emergence.

Ask a community college student who sleeps in her car between part-time jobs what she thinks about Harvard’s tax-free $9.6 billion windfall, and whether she could come up with any better uses for it.  

If we don’t want folks to go off the deep end politically, we need to stop pushing.  

Congratulations to Harvard for playing the game exceptionally well.  Now it’s time to change the game.

Wednesday, September 19, 2018

A Blended Learning Community


I’m guessing someone out there has tried this, but I haven’t actually seen it.  Any wise and worldly readers with knowledge of it are invited to share. What if we applied the blended format to learning communities?

“Blended” courses are the new version of “hybrid.”  They replace some, but not all, of the seat time of a traditional class with online activity.  For example, a hybrid chemistry class might do the “lecture” part online, while doing labs in labs.  Hybrids tend to have the most success with learning outcomes of any format, since they can draw on the best of both worlds, but they can be a tough sell to students.

Learning communities take many forms, but the simplest -- and most common in my experience -- involves two courses in different disciplines sort of teaming up.  For example, a writing class might pair up with a criminal justice class, so the writing would have a theme.  The same students would be in both sections.  Ideally, the professors would coordinate in advance so the assignments would play off of each other.  (One of my favorites, at Holyoke, was a combination of environmental science and literature focused on “cli-fi,” or science fiction about climate change.)  Learning communities are commonly cited in the literature on “high-impact practices” for increasing student engagement in their education.

What if we combined the two?  In other words, what if we did a learning community that combined a classroom class with an online class?  The “whole” would be blended, even if each individual part wasn’t.

It seems too obvious for someone not to have tried it, but I haven’t seen it done.

In theory, it could solve a couple of problems.  The most obvious is logistical. A learning community only works if all of the students can take the same two sections of the same two classes.  For student bodies as diverse and heavily employed as community college students, that can be a tall order. I’ve seen many learning communities struggle for enrollment, just because too few students can conform to the schedule.  But if one of the two classes is online, then the logistical challenge has been largely eliminated.

It could also help with the “bonding” aspect that is sometimes missing from online classes.  If the students see each other on a regular basis in the onsite class, then the rapport there and the interaction online can reinforce each other.  Discussions could carry over, and assignments could be allocated based on the format in which they make the most sense.

Finally, it could address the “green eggs and ham” problem that blended classes often face.  Students generally enroll in blended classes last, or reluctantly, or not at all. But the few who do, tend to have good experiences.  If a learning community becomes a gateway blended class, it could help dissipate student fear of the format and allow other blended classes to thrive.  Getting them to take that first taste of green eggs and ham takes some doing, but once they discover that it’s good, you’re home free.

Has anyone out there tried this, or seen it done?  Did it work? Or is there a hitch in the idea that isn’t obvious from here, but painfully clear when tried?  



Tuesday, September 18, 2018

Stackability Beyond the Bachelor’s


I’ve had issues with reports from Third Way in the past, so I approached the latest one, by Douglas Webber, warily.  It’s about the lifetime economic returns of a bachelor’s degree. It’s relatively thoughtful, and it wins points from me for noting that the real issue with student debt isn’t the amount of debt that students carry, but whether they complete the degree or not.  (That’s why the sizes of outstanding balances are _inversely_ correlated with repayment rates. Someone who dropped out after a semester or two is much less likely to repay loans than someone who graduated, even if the graduate borrowed more.) But a key omission jumped off the screen:

“For data availability reasons, I only examine the returns to a Bachelor’s degree for individuals who did not attend graduate school.”

Hmm.  I don’t know about the availability of the data -- I’ll defer to experts on that -- but I’d bet good money that the average salary among those who went on to, say, medical school or business school would boost the overall numbers.  Leaving them out distorts the picture.

The omission points to a frustration I’ve had for years with data about earnings and degrees.  Degrees can stack.

In the community college world, we speak the language of “stackable credentials” all the time.  It usually refers to certificates that can count towards an associate’s degree. For example, we include ServSafe certification in the Culinary degree.  Fields like nursing (LPN to RN), IT, and Automotive Tech lend themselves to stacking. The primary benefit of stacking is that a student can get a foot-in-the-door credential -- and therefore start earning money -- on the way to the degree.  For many students, as Webber’s Temple University colleague Sara Goldrick-Rab reminds us, that’s a necessity. Stackability can also offer working adults with field experience a head start, which saves time and money, and can be motivating.

But I seldom hear the word used for the four-year degree and above, even though it applies at least as well there.  An associate degree can lead to a bachelor’s, which can lead to a master’s, a doctorate, an M.D., a J.D., or all manner of other things.  (Yes, I know, MD’s and even JD’s are technically doctorates, but I’m deferring to common usage here.) Excluding the folks who go on to higher -- and often more lucrative -- degrees skews the sample.  

This may sound like a quibble, but it isn’t.  Nearly half of the bachelor’s degree grads in the US have significant numbers of community college credits in their transcripts.  That represents a major economic contribution for which community colleges get little or no credit.

An associate degree that feeds into a bachelor’s probably has a better return on investment than a traditional bachelor’s, since the upfront cost is lower.  I’d also bet that the average earnings of that group would be higher than those of students who stopped at the associate’s level.  Leaving them out of the analysis is misleading.
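
A purely illustrative sketch of the “lower upfront cost” point -- the tuition figures below are invented round numbers, not data from Webber’s report or from any actual institution:

```python
# Illustrative comparison of upfront cost for a "stacked" path
# (two years at a community college, then transfer) versus four years at
# a four-year school.  The tuition figures are invented round numbers,
# not data from Webber's report or from any real institution.

cc_tuition_per_year = 5_000            # hypothetical community college tuition
university_tuition_per_year = 15_000   # hypothetical four-year tuition

stacked_cost = 2 * cc_tuition_per_year + 2 * university_tuition_per_year
traditional_cost = 4 * university_tuition_per_year

print(f"Stacked path:     ${stacked_cost:,}")       # $40,000
print(f"Traditional path: ${traditional_cost:,}")   # $60,000

# Same bachelor's degree at the end; if post-graduation earnings are
# comparable, the smaller denominator alone improves the return on investment.
```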

Webber’s piece argues for better data, and on that, I wholeheartedly agree.  The politics of getting that data are daunting -- precisely because of how useful it would be -- but the usefulness is clear.  We can’t give credit where credit is due if we forget that degrees can stack.

Monday, September 17, 2018

For-Profits, from Imitation to Infiltration


Several years ago, it was commonplace to argue that for-profit colleges were an existential threat to public higher education.  (I even did that towards the end of my book.) Now, for-profit colleges are taking on water and losing market share much more quickly than their public counterparts.  

I’ve been mulling over what happened.  Having worked in both sectors, I’ll offer a few thoughts.

The dominant model of entrepreneurial higher education is shifting from imitation to infiltration.  In other words, we aren’t seeing as many fully parallel institutions setting up shop and going toe-to-toe with community and state colleges as we did fifteen years ago.  But we’re seeing much more of a for-profit presence within the operations of public colleges than we used to. That presence ranges from e-packs to LMS platforms to ERP solutions to research on colleges themselves.  In some cases, such as Purdue Global, for-profit and non-profit have merged, and it’s still not entirely clear where the boundaries are. Kaplan was able to shake off its tarnished name and pick up the respected Purdue name while maintaining its for-profit mission.  Kaplan isn’t freestanding anymore, but it hasn’t gone away.

Publishers, too, are moving closer to full-provider status, while leaving it to the colleges themselves to deal with accreditation issues.  Pearson, for instance, covers everything from textbooks and lab manuals to online problem sets and tutoring. Cengage is countering OER with its “Netflix for books” model, in which it turns entire colleges into marketing arms for Cengage.  Investors are very much present and making money; they’ve simply changed the form of the investment.

Tressie McMillan Cottom’s excellent Lower Ed notes, too, the distinction between privately-held and publicly-traded for-profits.  The latter tend to be less stable, given investors’ insistence on constantly-increasing returns. When times are good, stock market money pours into the sector, allowing for a pace of expansion that no public institution can hope to match; that’s especially true when states have decided that higher education is a lower priority than it used to be.  But if returns start to lag, investors are unforgiving. They want what they want, when they want it.

I remain convinced that “patient capital” could do well, but that “impatient capital” is likely to struggle.  It takes years to show results in the higher ed sector, and years beyond that to build reputation.  Phoenix, for instance, grew relatively sanely for decades until the mid-aughts, when it yielded to stockholder pressure and formed “Axia College,” effectively eliminating entrance standards.  From that point, the entire edifice started to crumble.  Investors who are willing to put in the time (and take the initial operating losses) to build a reputation could see significant payoff eventually; in practice, that means going private.


To some degree, too, community colleges have adapted.  Some of the innovations that for-profits adopted to gain currency were relatively easy to imitate.  Evening and weekend programs, and later, online programs, met real needs of working adults. In some cases, for-profits had first-mover advantage in adopting those approaches.  But community colleges have caught up, and offer both lower prices and better reputations. The competitive advantages of those were always bound to be temporary.

The relative victory of the publics, though, comes in part by imitating what they opposed.  In the for-profit sector, student retention was always a paramount concern.  After all, from their perspective, a retained student is a repeat customer, and it costs much less to retain a customer than to attract a new one.  The “guided pathways” movement resembles, in some basic ways, the curricular structure that DeVry used 20 years ago.  When I was there, the NJ campus offered only five majors.  The entire system didn’t offer many more than that.  Courses were scheduled around cohorts, with “gen eds” taking the midday slot so that both morning and afternoon cohorts could take them.  A short list of majors meant that they always had critical mass to run specialized classes, and advising was relatively straightforward.  The system had its share of implementation issues -- transfer credits have an entropic effect on cohorts -- but the underlying assumption was clear.  Years later, when community colleges started moving in that direction, I recognized it.

And that’s before even touching on adjunctification.  Anyone who wants to claim moral purity for non-profits has to come to terms with that.  

I don’t foresee a resurgence of the “imitation” model anytime soon, despite a remarkably different regulatory environment, but I do foresee the two models intertwining more.  The “infiltration” model fits well the needs of colleges facing public disinvestment, while maintaining deniability at the level of marketing. Purdue Global may look unusual now, but I wouldn’t be surprised to see similar hybrids develop as one sector craves respect and the other craves funding.  These days, the call is coming from inside the house.

Sunday, September 16, 2018

The Decline of Humanities Enrollments and the Decline of Pre-Law


I’ve been hearing variations on “crisis in the humanities!” ever since college.  Back then it was largely about content; it was the early stages of the “canon” wars.  But even then we used to hear, on a regular basis, that fewer students majored in the humanities than used to.

It was mostly measurement error.  Humanities enrollments spiked around 1970, then subsided to their historic level by the early 80’s and stayed fairly steady for the next few decades.  The postmodern wave came and went, with no discernible impact on enrollments one way or the other. Narratives of decline that took 1970 as the point of contrast were based on mistaking an aberration for a norm.  If you move the start date back several decades, it becomes clear that the period from about 1965 to 1975 was a fluke. Once enrollments regressed to the mean, the subsequent battles over multiculturalism, cultural studies, postmodernism, and the like didn’t move the needle.

Over the last five years or so, though, the oft-issued warnings have finally started to come true.  Enrollments in the humanities and the more qualitative social sciences are dropping, especially in the four-year sector.  (They remain strong in the two-year sector, where they’re heavily represented in general education requirements.)

Benjamin Schmidt, from Northeastern, has a good essay in the Atlantic speculating that the cause of the recent drop is the aftershock of the Great Recession.  To which I’ll respond, well, kinda.

Schmidt correctly dismisses some of the usual canards around the narrative of decline.  No, feminism and multiculturalism aren’t to blame; enrollments remained steady for decades after they became integral to the enterprise.  No, it’s not about postmodernism; again, basic chronology debunks that.

Schmidt lands instead on fear of unemployability, and the greatly exaggerated differences in employability that undergraduates often imagine.  Economically, on average, you’re no better off with a major in biology than a major in history, but students don’t necessarily know that; they hear “STEM STEM STEM” all their lives, and assume that biology is included.  Schmidt shows his roots as a historian in resorting to a sort of secular Calvinism to explain student behavior: students are doing the sorts of things that they think economically rational people do.  They’re getting it wrong, but the error doesn’t negate the motive.  He goes on to lament the pressure that students are under to treat their education instrumentally, in narrowly economic terms.  The strongest point of his argument, I think, is in noting that in the military academies, a similar decline has not occurred.  In the academies, post-graduation employment is guaranteed.  Where anxiety about post-graduation employment hasn’t increased, humanities enrollment hasn’t decreased.  It’s not proof, but it’s consistent with the theory.

I’ll show my roots as a political scientist to raise a partial objection.  It’s a myth that humanities majors don’t care about post-graduation employment.  What changed was the safety valve of subsequent law school enrollment.

Law school was long the default post-graduation plan for majors in qualitative fields.  As long as you had the prospect of a lucrative legal career after college, you could safely major in English or poli sci.  Those students didn’t ignore the vocational imperative; they just postponed it. And for a long time, that worked pretty well.  

But the Great Recession, combined with AI and offshoring, did a number on law as a career option.  Law school debts kept going up, but the employment picture for new lawyers got abruptly worse. Applications to law school dropped precipitously.  

As interest in law school dropped, we shouldn’t be surprised that many of the pre-law feeders dropped, too.  Poli sci was the classic pre-law major, and it has taken it on the chin, despite a political environment that demands analysis more than any other in my adult lifetime.  I’m honestly at a loss when I see sections of American Government only partially filling in 2018; you’d think that it would be the hottest ticket in town.  But no.

Having spent time at DeVry during the first tech boom, and the first tech crash, I can attest that trying to time the job market years in advance is a tricky business.  (The Chronicle had a decent piece last week on the Bowen report of the late 1980’s, predicting dire shortages of humanities faculty in subsequent decades. If a jobs forecast has ever been more spectacularly wrong, I haven’t seen it.)  Job markets fluctuate with the larger economy, with political changes, with technological shifts, and with all manner of other variables. I’m inclined to believe that some of the more fundamental academic skills -- sorting through lots of partial information, synthesizing it into something coherent, and communicating that synthesis clearly -- will continue to command premiums.  But it’s hard to get a lot more specific than that, especially at the four-year level.

Community colleges have the relative advantage here of being able to focus more on short-term turnarounds.  If a student needs a job in a year or less, this is the place to go. But guessing what will be hot ten years from now is pretty daunting.  There was a time when law seemed like a sure thing. Now, not so much.

So yes, humanities enrollments have declined, and yes, it seems plausible that the recession had a lot to do with it.  But no, it didn’t introduce vulgar economics into the choice of majors. Vulgar economics were always there. The recession simply took away an escape valve, and students responded accordingly.








Thursday, September 13, 2018

The Admissions Two-Step


On the way back from the Michigan trip, The Boy and I were talking about the various schools on his list, and how he compares them to each other.  It quickly became clear that the ranking process is more complicated now than it used to be.

I’m old enough to remember when college admissions was mostly a one-step process.  You’d get in, or not. There were a few variations -- waitlists, say, or the early decision option -- but they were understood as variations on a theme.  Students who were applying to multiple places would mentally rank them in a preference order, and as long as the decisions came back in binary form, that was pretty much that.  Students applying to selective places might do a “reach” or two, a couple of good bets, and a safety school or two.

Now, it’s a two-step process.  Rejections are still rejections, and probably always will be.  Waitlists are still waitlists, with all of the nail-biting that may imply.  But acceptances have taken on new shades of gray. With the cost of college having risen so dramatically over the last few decades, the decision process doesn’t stop once you have the “yes” pile.  It includes waiting for, possibly appealing, and comparing financial offers.

For most of us, my family very much included, list price is not an option.  Luckily, I’m ensconced in the system enough to know that list prices are not to be taken seriously.  An acceptance to a college that charges $60,000 per year and actually expects us to pay that is effectively a rejection.  And list prices don’t necessarily correlate with actual charges in any consistent way.

I told him -- and he already knew -- that the most he can do now is to decide on places where he could see himself thriving, and put in the best applications he can.  Then wait and see the offers, and compare those.

The frustrating part for him is that he has no idea at this point how the offers will compare, so he really can’t do a priority list that means anything.  It feels chaotic. The frustrating part for me is knowing that it’s entirely possible that he might get in to some places he considers wonderful, and be unable to go.  But in those cases, the parents are the bad guys.

I’m told that this is a very American problem.  It’s a function of several factors, not the least of which is state disinvestment in public higher education.  My grandfather worked as an electrical lineman for Detroit Edison; he was able to send his daughter to the U of M and his son to a local technical college without much strain.  Even after inflation, I make considerably more than he did, but sending even the first kid to college is financially daunting. Something changed.

And that’s with pretty good knowledge of how the system works.  Someone who takes list prices literally, or who doesn’t know to shop financial aid offers around, might preemptively narrow the field more than it needs to be narrowed.  That often plays out along lines of race and class in predictable ways.

I’ve been heartened to see some schools move away from “early decision” and towards “early action” instead.  They both offer earlier answers, but the latter isn’t binding. If you accept an early decision offer, you’re basically taking it on faith that the financial offer will be acceptable.  They have you over a barrel. Early action offers still allow for comparisons. For schools that place any value at all on economic diversity, early decision simply must go. It amounts to an express lane for the wealthy.

TB is a great student and a great kid.  He’ll be fine, and any college would be lucky to have him.  But it would be nice if we could go back to the one-step process.  The second step just feels mean.