Thursday, August 30, 2018

The Common First Semester


An idea that comes up from time to time surfaced on campus again this week.  Couldn’t we save everyone a lot of time and angst by creating a common first semester that all, or nearly all, students could take?

At first glance, it seems to make a lot of sense.  We know what the highest-enrolled classes on campus are, and most of them are taken early.  A common first semester, perhaps with a meta-major course built in, would streamline advising and registration and make it far easier to ensure that students take courses that count.

I could see the idea working well at a selective, residential liberal arts college.  It’s a variation on the “freshman seminar” idea, with the goal of starting everyone off on equal footing and ensuring that everyone has a similar foundation on which to build.  We might focus more on skills than on the canon, at least at first, but the general idea is similar.

The catch is that the students aren’t.  

Very few of our students take 15 credits in their first semester.  Most place into at least one developmental (or “foundational”) course.  Some bring transfer credits. Any one of those facts would throw off a tightly programmed sequence; combine a few, and it gets worse.

Pragmatically, I believe that it’s important for students to get at least one class in the first semester that they recognize as being part of why they went to college.  The traditional remediation model is based on an “eat your vegetables” approach, in which students have to start with the subjects with which they’ve struggled the most in the past before they get to the good stuff.  That has entirely predictable effects on motivation. For that to work, of course, the class that fills the “good stuff” slot will necessarily vary.

All of that said, though, I know that some community colleges have done it.  I’m just unclear on how.

So, my wise and worldly readers, I turn to you.  Have you seen a community college (or public four-year college with a similar student profile) successfully get to a common first semester?  If so, how did they do it? And what did you learn that falls under the category of “if we knew then what we know now?”

Any how-to’s would be greatly appreciated.

Happy Labor Day weekend!

Wednesday, August 29, 2018

"What Would Help?"


Every so often, I have one of those conversations in which an initial misunderstanding inadvertently lays the groundwork for a good exchange.  That happened this week with Paul Glastris, the editor of Washington Monthly, who called to take issue with my critique of WM’s latest college rankings issue.

WM’s rankings are intended as a sort of rebuttal to the US News rankings.  The US News rankings reward wealth and prestige, so they tend to reinforce existing hierarchies.  As Glastris put it, “the hierarchy within the profession of higher education is not aligned with the public interest.”  So instead of looking at “inputs,” WM tends to focus on student outcomes, with a special focus on lower-income students.

My critique was that the US News rankings gain their power through their broad appeal to a constituency of parents and prospective students; the sheer size of the constituency, and the subsequent effects on enrollments, give it influence.  For all of its well-known flaws, it carries weight. That’s why so many colleges cheat on it by supplying false information. The WM rankings, by contrast, have almost no discernible constituency. They really aren’t useful from a comparison-shopping perspective, and they’re too inside-baseball for many policymakers.  They might appeal to a thin slice of equality-minded higher ed nerds (hi!), but there aren’t as many of us as one might like, and our political clout is modest at best.

In other words, the critique was based on broad agreement with the goals of the WM piece.  It was largely tactical. A policy argument without a constituency is unlikely to catch on.

To which Glastris asked, reasonably, what an egalitarian analysis with a constituency would look like.  What would help?

It’s a fair question, so I’ll put it to my wise and worldly readers, and then share a few thoughts.

He accepted a few suggestions on the spot.  A single search bar in which you could enter the name of a school, without first specifying a category, would help.  A category specific to HBCU’s, and another specific to women’s colleges, would make sense. He agreed that it would be nice to be able to rank public universities on their cost to out-of-state students, but the data aren’t available for that.  (For example: is the University of Michigan still a good deal if you’re paying out-of-state tuition?) And the section on community colleges should include degrees, not just certificates. So that ground is covered.

I’m thinking that starting with exceptions might make sense.  For instance, their list of top national universities starts with Harvard, Stanford, and MIT.  That’s hardly revelatory. But a list of “Top Surprises” or “Most Underrated” could be of interest to parents and prospective students, and could make some political points at the same time.  

Financial aid remains a black box.  I eat, sleep, and breathe higher ed, and the best advice I could give my son was to apply to a bunch of places and see who gives him the best offer.  But there is some comparability. For example, most colleges don’t commit to meeting full need. They could easily be ranked by how much “gapping” they do, with less being better.  And don’t count loans as aid.

It would be good to know which schools with more economically diverse student bodies have tremendous success getting students into law school or med school.  It’s one thing to get students through your own program; that can happen in ways legitimate or illegitimate. But if they continue to do well at the next level, that’s a good sign that something is going very right.  That’s yet another argument for tracking upward transfers from community colleges, of course, but it’s also an argument for finding the affordable schools that make good “feeders” for law, med, or grad schools.

That’s separate from salary data, since students in school don’t usually earn much yet.  But it would be helpful to know. The Boy wants to be a doctor; if there were good information on the most affordable and successful feeder colleges for med school, I’d want to see it.  And I don’t think I’m alone in that.

Given the default assumption that resources correlate with results, the places that get better-than-expected results with fewer resources probably have something to teach us.  

Wise and worldly readers, what would you add?  If you wanted to generate comparative college data that would appeal to enough people to matter, and that wouldn’t just reward the usual suspects, what would help?

Tuesday, August 28, 2018

In Defense of the $999 Textbook


https://www.insidehighered.com/digital-learning/article/2018/08/28/universitys-999-online-textbook-creates-confusion-and-outrage

(pause)

(scratch head)

(cough)

(shrug)

I got nothin’.

(walks off stage)

Monday, August 27, 2018

Thoughts on the Washington Monthly College Guide


When I taught writing, I told the students that they needed to know their intended audience.  Were they writing for children, for educated adults, or for specialists in a field? Factors like those would affect the choices a good writer would make.


That wasn’t how I was taught, but I think it holds up.  It moves away from the idea that there’s an objectively correct way to write, and towards the idea that writing takes place in (and assumes) a context.  Rather than pure self-expression or channeling of a muse, the point is to communicate to a reader.


I thought about that in looking at Washington Monthly’s latest college guide.  It’s an alternative to the US News rankings. The US News rankings tend to reward wealth, reputation, and exclusivity, as opposed to performance with the students they actually get. (Judging by the number of stories about colleges and graduate programs sending US News false data, the rankings must carry great weight.)  The Washington Monthly rankings look instead at indicators like performance gaps between students on Pell grants and students who aren’t; levels of community involvement; graduation rates; and the upward mobility of graduates. Simply admitting a bunch of rich kids won’t cut it.


As a full-time educator and part-time policy wonk, I get the appeal.  A college whose students consistently outperform what their demographics would have predicted is doing something right.  As a taxpayer and a citizen, I’m more inclined to want to support colleges that do right by a broad swath of students than to support a bunch of semi-talented and overentitled rich kids.  


As a community college person, I was a little annoyed at the complete disregard of transfer, but that’s an asterisk.  That’s an easy fix for a subsequent year, should they so choose.


As a parent, though, I’m not taken with it.  The Boy is starting his senior year of high school next week, so we’re in the thick of the college search.  (He has already announced that living at home, and especially attending a school where his Dad works, is entirely out of the question.  Fair enough.) That means looking at program offerings and location, but also at quality, both real (as far as can be determined) and perceived.  He wants to go someplace “good,” and I want him to have that opportunity.


The US News rankings are aimed at parents and students.  That’s where they get their power. Ambitious students want to know that they’re picking places that will pay off.  The rankings are deeply flawed in all sorts of ways with which most of us are familiar, but they’re accessible. And they address at least some of the questions that parents and students have.  To the extent that rankings drive applications and/or enrollments, they matter to institutions. Their weight comes through their function as a consumer guide.


The Washington Monthly rankings, noble as they are, are nearly useless as a consumer guide.  Part of that is basic formatting. For example, to look for a particular college, you first have to guess which category it’s in.  There’s no “search” box that spans the various categories. Is Montclair State under “regional,” “Master’s,” “liberal arts,” or “best bang for the buck?”  The only way to know is to chug through each category, entering the name until it pops up. That assumes a lot about the reader.


It also assumes that, say, Pell success gaps are as important as external reputation.  And that may be true at a policy level. But to me, as the parent of a non-Pell student, it’s not really relevant.  From a shopper’s perspective, I’m concerned about what the school will do for my kid. Many of the factors WM considers don’t tell me that.


For example, compare a school that gets above-average results with average students, to one that gets excellent results with excellent students.  The first one gets props for punching above its weight, but a student will get a better education at the second. The first speaks to policy, the second to shopping.  Put differently, the first speaks to wonks, and the second to parents. And there are a lot more parents than there are wonks.


The difference matters because what makes US News so powerful is precisely its attention to parents and applicants.  Its oversimplifications make it accessible, which gives it power. WM offers a much more ethically informed take, but at the cost of usefulness, at least to people outside of policy wonk circles.


“Best Bang for the Buck” comes closer to a parental perspective, but there, too, it makes a common and deeply frustrating category mistake.  For public universities, it assumes in-state tuition. But that’s only true if you’re in-state. Okay, the University of Michigan provides an excellent value to Michigan residents.  That’s great. But is it still an excellent value for people paying out-of-state tuition? The rankings don’t say. Given that most of us are non-residents of more places than we’re residents of, that seems like a major omission.  It seems like it shouldn’t be that hard to offer two rankings for each public institution, reflecting in-state and out-of-state, respectively. That’s the sort of thing a wonk might miss, but a parent will pick up on right away.


I offer these thoughts in hopes of helping WM to improve.  The flaws in the US News rankings are well-known, and having people who actually understand both policy and education weigh in makes sense.  But don’t forget the reader. US News, for all of its flaws, got the reader right. If you don’t do that, the piece doesn’t work, no matter how well-intended it is.

Sunday, August 26, 2018

Conditional Acceptance, as Seen from Here


I was struck, in reading the recent Hechinger Report piece on conditional acceptance, by how negatively it was portrayed.  Within the frame of reference they used, it’s understandable, but I can attest that it looks very different from here.

In this context, conditional acceptance refers to selective universities or colleges allowing applicants to enroll, but not right away.  First, they have to spend some time at another institution, often with requirements around courseloads and GPA’s. If the applicant follows the plan and meets the requirements, she’s in.  The piece profiles one student whom Cornell accepted on the condition that she spend a year at Ithaca College first; she took the deal.

The practice is portrayed as secretive and somewhat ethically suspect.  It’s a way for selective institutions to allow in some students who won’t “count” in their selectivity statistics.  It can be a consolation prize for the marginally talented uberwealthy, or for filling in demographic holes in an entering class.  In the case of graduate programs with lots of international students -- less common than they used to be, but still -- they can be a way to admit students with talent but with limited proficiency in English.  Spend a year in the ESL program at a community college, the university might say, and if you do well, you’re in.

I can understand why people who are playing the exclusivity game might see conditional acceptance as a form of cheating.  But from a community college perspective, it looks more like a form of guaranteed transfer.

Community colleges catch flak when their credits only transfer piecemeal.  That’s because many people don’t know, or haven’t thought through the implications of the fact, that the receiving institution makes the decisions around acceptance or denial.  The CCRC has noted repeatedly that credit loss upon transfer is a major obstacle to on-time graduation. It costs time and money, and students are (rightly) insulted by it.

But in the case of conditional acceptance, the student has both a path and a guarantee.  Follow the prescribed path and get the grades you need to get, and you’re in. That provides a powerful incentive to perform well, and assures the student that the effort will pay off.  That’s a much more powerful message than a simple rejection.

From an institutional perspective, students like these are on a mission, and students on a mission tend to finish what they start.  There’s a catch when the time in purgatory is less than the full associate degree: the student will count against our graduation rate, despite getting exactly what she came for.  That’s a measurement error, and it has consequences, but it’s fixable in principle. It’s also fairly common now, except that it tends to happen without guarantees. Guarantees make a difference.

I understand that many students offered conditional acceptance reject it, going instead to places that accept them outright.  That makes sense; they want what they want. It isn’t for everyone. But for the folks for whom it makes sense, I can see a lot of upside.  From here, it doesn’t look bad at all.



Wednesday, August 22, 2018

The Attention Problem


“Why don’t they see it?”

Melinda Karp’s article about colleges struggling to move from reform proposals to actual implementation struck a chord with me.  I’ve been in the “why don’t they see it?” position enough times to have obsessed about this for a while.

Karp was a longtime member of the CCRC, second-in-command for a while.  Now she’s working with colleges across the country to help implement some of the reforms that she studied and championed.  (Full disclosure: we’re friends.) Her piece describes how easily sand gets in the gears, and some of the steps that colleges need to take to get from here to there.

She comes at the question from the perspective of a researcher; I come at it from the perspective of a practitioner.  We notice many of the same things.

She notes, correctly, that some colleges want something like “Guided Pathways” to come prefabricated, or as a simple checklist: do this, this, and that, and voila!  That rarely works, for a set of reasons. At a basic level, each context is different. That’s a function of factors ranging from different state laws to collective-bargaining status to local cultures.  For example, when I was in Massachusetts, there was a state rule declaring that no full-time faculty could teach a course that started after 4:00 p.m. New Jersey has no such rule. In collective-bargaining settings, there are often old grievance settlements or arbitration findings that become binding in ways that nobody could have foreseen at the time.  Those are facts of life.

But I think there’s a more fundamental issue, too.  I think of it as the attention problem.

As a practitioner who participates in the policy wonk world to some degree, I spend a lot of time thinking about ways to structure the college to make it better.  I engage with research and researchers, attend conferences when I can, correspond with people doing terrific work, and try to bring the best of it to bear locally, as circumstances allow.  I also deal intimately with the local rules of the game and local culture. Having worked at several different community colleges, I even bring something of a comparative perspective to bear.  And I have the wisest, worldliest readers the interwebs have to offer.

Put differently, I stew in this stuff.  

Most people don’t.  They’re focused -- appropriately -- on other things.  Faculty are largely focused on their classes. Staff are focused on their jobs.  Unions are focused on protecting incumbent employees. That’s reasonable -- in my faculty days, I focused much more on my classes and my students than on anything the administration said -- but it leads to the sort of instrumentalist reductionism that Karp laments.  If I have a bunch of other things on my mind, and someone is talking to me about Guided Pathways, I’ll immediately boil it down to “what does this mean to me?” That can quickly lead to the sort of fear-based nitpicking that some cultures are quick to deploy against anything new.

Getting past the “checklist” vision that Karp rejects requires people spending the time, with open minds, to engage with an idea at length.  In the very short term, most don’t see a reason to do that. They’re busy. I’ve had the experience many times of working well with a small group, making great progress, only to bring an idea to a larger group and see it attacked self-righteously by people using arguments that they don’t know were discredited a long time ago.

Odessa College is the exception that proves the rule.  It was able to enact a fundamental transformation of its academic calendar -- moving to shorter semesters -- and to achieve remarkable gains in student success, with the greatest gains accruing to the students who typically perform the worst.  In higher ed policy circles, that’s the gold standard. But the reason it was able to do that is that it had a gun to its head; the state of Texas had plans to close it. That solved the attention problem. Absent a visible, undeniable, existential threat, it can be a lot harder to get folks to look up at the big picture.  Of course, by the time a threat is undeniable, it may be too late.

Thoughtfully adapting complicated ideas to local contexts requires, well, thought.  It requires recognizing the importance of the idea, and being willing to devote time and suspension of disbelief long enough to get into the weeds.  And it requires large numbers of people to do those things, not just a few brave early adopters. That’s a larger ask than one might expect.

The appeal of the prefabricated checklist is that it offers the prospect of skipping the “thoughtful attention” phase.  That may sound sinister or stupid, but the attention problem is often fatal. The temptation to make it irrelevant is real.

Why don’t they see it?  They don’t know why they should.  Get past that, and the rest falls into place.  How hard can that be?...

Tuesday, August 21, 2018

The Other Use of Standardized Tests


I never acquired a taste for basketball, despite The Boy having spent years playing it.  But he did, so I’ve spent plenty of time at his games, and have taken him to college games.  It’s a running joke with us that whenever we watch elite teams -- either high-level college teams in person, or pro teams on tv -- I groan whenever a player misses a free throw.  It’s even worse when a player at that level misses two in a row.

That’s because free throws don’t change.  They’re the same distance and angle every single time, and have been since the player’s childhood.  They’re unblocked, by definition. They aren’t rushed. To my mind, elite players should sink free throws as a matter of course.

They’re a sort of standardized test.  A shot from the field can be different from others, depending on all sorts of variables, but a free throw is a free throw.  It’s a constant. You can compare free throw percentages over time and actually learn something. It doesn’t tell you everything -- if it did, Shaq wouldn’t have had a career -- but it’s something real.

For most of my career, it has been an article of faith among progressive academics that standardized tests are terrible.  They coat existing stratifications with a patina of “merit,” the argument goes. They’re only loosely correlated to academic success.  They’re reductionist. And there are elements of truth to all of those.

But as with free-throw percentages, they can also tell you something.  That’s why I was happy to see this piece from Brookings about the results of the state of Michigan giving free SAT or ACT tests in public schools during the school day.  Even conceding the very real issues with standardized tests, they also brought to light thousands of academically talented students from low-income backgrounds or out-of-the-way places who might otherwise have gone unrecognized.  (Happily, they’re making room for the SAT’s by dropping other standardized tests, so they aren’t crowding out instructional time.)

Standardized tests’ function as an equalizer may be part of why some elite private high schools are dropping AP tests altogether.  If a kid from Nowhere Special Public HS can score the same “5” as a kid from Snooty Prep, then what’s the payoff of Snooty Prep? From SP’s perspective, too much transparency could be dangerous.  It could reveal something they’d rather not reveal. Equalizers aren’t appealing when you’re on top.

Equalizers work because “good” students aren’t only found in “good” schools.  They’re everywhere, hiding in plain sight. As a society, we go to great lengths not to notice.

Talent-scouting was actually one of the original purposes of the SAT.  James Conant, a president of Harvard, wanted to recruit students of talent even from out-of-the-way places.  Grade scales may be quirky or inconsistent, but a single test with a single scale offers a reality check. It gives the talented kid from East Nowhere a chance to prove what she can do.

Those of us at community colleges, though, have known about hidden talent for a long time.  Open-admissions policies allow students fresh chances to prove themselves. To the extent that the “free SAT” movement plays into the “undermatching” argument -- which suggests that it’s tragic when an academically talented student goes someplace that isn’t selective -- I have no use for it.  But to the extent that it helps us all see that talent is distributed more widely than we currently perceive -- and that there’s much to be gained by improving the resources available to the colleges that lots of talented students from East Nowhere attend -- I’m all for it.

Yes, standardized tests carry all sorts of baggage, some of it deserved.  But they also make visible some living, breathing rebuttals to the idea that academic talent tracks parental income perfectly.  They can even form the basis for turning the “undermatching” thesis on its head. Michigan’s motivations may have been mixed, or confused, but it may have stumbled into something good.  I’m on board.

Monday, August 20, 2018

The Eye in the Sky


Technological progress has been great for nostalgia.  Anyone over 40 (maybe 30?) knows the experience of going down a rathole on YouTube, seeing clips from shows you watched as a kid and having the uncanny sense of vague recognition combined with abject horror that you once thought they were good.  Spotify does the same thing for music; the songs my parents played a lot when I was a kid are sort of familiar, but when I hear them now, they often register differently.

This week I discovered that Google Earth can do the same thing, but with places.  I harnessed the eye in the sky to look anew at houses I grew up in, or frequented as a kid.  It didn’t go as planned.

The house in which I spent ages 2-13 looks about the same, except that the trees are bigger.  But doing the 360 degree view, I noticed that the house across the street is boarded up. A quick look on Zillow confirmed that it was foreclosed on a couple of years ago, and now sits opaque, with a menacing-looking red X on one of the boards.  That was...odd.

The house my Dad lived in for a few years after the divorce is also boarded up.  It, too, was foreclosed on a few years ago. I felt bad for the folks in the house across the street with its “for sale” sign up.  Good luck with that.

The house my grandparents lived in for most of their lives, the single consistent location throughout my childhood, is empty now and, yes, foreclosed on.  The listing on Zillow suggests that prospective buyers bring a flashlight.

Coming on top of each other in relatively rapid sequence, the effect was jarring.  The middle-class worlds of my childhood, in which I formed my sense of how the world works, are crumbling.  

Part of that is geography.  The first two houses were outside of Rochester, New York, the economy of which took a beating as Kodak downsized.  The third is outside of Detroit, the economy of which has taken a long series of beatings. Those inland cities don’t support the scale of middle-class life they once did.  

Meanwhile, here on the coast, there’s a pattern of young people leaving because they can’t afford to live here.  The Chronicle features an article this week about New Jersey trying to reduce the number of high school graduates it exports, which is the highest in the country.  It spends a lot on K-12 education, with NAEP scores consistently among the top three states in the country, but then does a relatively indifferent job of supporting public higher education.  And that has been true for decades. (I’ve even mentioned it before here.)

Richard Florida’s work on the creative class suggests that places that attract lots of talented young people will thrive; those that export them, presumably, will not.  Rochester did a great job of exporting talent; nearly everyone in the honors classes in high school got the hell out as soon as they could. Now, it’s struggling at a level that would have seemed implausible back then.  Detroit’s struggles are well-known, but the contrast with Ann Arbor is hard not to notice.

I’m concerned that if New Jersey (and Connecticut, and Vermont, and…) doesn’t realize what’s happening, it’ll wind up in a similar spot, and faster than it realizes.  

If the eye in the sky looked instead at, say, Austin or Seattle, I’d bet that the impression would be different.  Instead of boarded-up houses, it would see growth. And that growth came from smart, ambitious, educated young people either staying or returning because they found it welcoming.  They didn’t sense stagnation and bolt.

It may be counterintuitive to warn the state with the highest population density in the country about people leaving.  But the folks who graduate from a strong K-12 system and head off to the Stanfords of the world may not come back, and we aren’t doing a lot to attract new ones.  And as challenging as density can be, it’s much worse when combined with decline.

Yes, there’s obvious self-interest in advocating for a really aggressive move by NJ to build its higher ed sector and try to keep and attract the talented young.  I’ll admit that. But I’ve seen what happens if you don’t. It isn’t pretty, and the eye in the sky has pictures to prove it.




Sunday, August 19, 2018

“Netflix for Books”


A major textbook publisher has announced an all-you-can-eat subscription version of textbook purchasing.  A student pays a set rate per semester, and has unlimited access to new copies of the latest books provided by that publisher.  It’s being touted as a major money-saver for students, and in some cases, it probably is.

That said, I have major misgivings.

The most basic is that the “Netflix” analogy is misleading.  Netflix carries movies and shows from many different studios.  But this publisher only carries what it publishes. It would be as if Netflix only carried content provided by Fox Studios.  Fox produces a lot of stuff, some of it (probably) quite good, but it’s only one producer. If something you wanted was produced by somebody else, well, too bad.  For that, you’d have to go outside of the subscription and pay extra, without any sort of deduction from the subscription price.

If the subscription covered everything in print, I’d be more open to it.  But if it applies only to one publisher, no matter who that publisher is, I have a problem with it.  

To carry the television comparison farther, we’ve seen what happens when a single cable provider has a local monopoly.  Its prices rise, and rise, and rise. The providers are very good at offering low introductory rates, but then jacking them up dramatically after the initial contract ends.  When I moved a few years ago from a place with one cable-internet-phone provider to a place that had two, my monthly bill dropped by a hundred dollars, even as the service improved.  A little competition goes a long way. Give a single publisher carte blanche, and after a brief period of good behavior, I’d expect to see a similar cost curve kick in.

And then there’s the implied pressure on faculty to maximize the value of the subscription by favoring one publisher.  If my class uses books in the agreement, then the marginal cost to students for books for my class is zero; if I use other books, then my students have to pay the full cost of the subscription plus the a la carte cost of my books.  That creates a strong incentive to stick with the subscription.

“But wait!” I hear you thinking.  (Hearing people thinking is a rare skill.)  “Aren’t you a proponent of OER? Isn’t that basically the same thing?”

It isn’t.  OER comes from a host of providers, and in a bunch of forms.  It isn’t limited to what one publisher wants to offer. Faculty can mix and match OER to get exactly what they want, drawing on as many sources as they like.  With the subscription, they’re held to whatever the publisher sees fit to provide. Also, with OER, we aren’t asking part-time students to pay a full subscription fee.

None of this is meant to cast aspersions on the quality of what that publisher offers.  It’s a major one, and some of its stuff stands on its own. In some tightly-prescribed cohort programs, like nursing, the “Netflix” model may even make sense.  But on a collegewide level, I’m deeply skeptical. I’ve been burned by cable companies enough times to be wary.

Wise and worldly readers, is my assessment fair?  Or is there an upside to the “Netflix for books” model that I haven’t noticed?

Thursday, August 16, 2018

Friday Fragments, Sustainability Edition


Thanks to Melina Patterson for highlighting this one.  It’s a bill for a semester at the University of Houston in 1975.  The total is $152.50. Correcting for inflation, that would be slightly over $700 now.
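For anyone who wants to check the arithmetic, here’s a minimal sketch of the inflation adjustment.  The CPI figures are approximate assumptions (roughly 53.8 for 1975 and 251 for mid-2018), so the exact answer shifts a bit depending on which index and month you use.

```python
# Rough CPI adjustment of the 1975 University of Houston bill.
# The CPI-U values below are approximate assumptions, not official figures.
CPI_1975 = 53.8    # approximate annual average, 1975
CPI_2018 = 251.0   # approximate value, mid-2018

bill_1975 = 152.50
bill_2018_dollars = bill_1975 * (CPI_2018 / CPI_1975)

print(f"${bill_1975:.2f} in 1975 is roughly ${bill_2018_dollars:.0f} in 2018 dollars")
# Prints something like: $152.50 in 1975 is roughly $711 in 2018 dollars
```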

At $700 a semester, most students could work their way through.  But it’s juuuuuust a bit higher than that now.

In the year-to-year series of incremental changes, it’s easy to lose track of that sort of thing.  Take a step or two back, though, and the changes are seismic.

Take that trend line and project it forward, say, twenty years.  Assuming that real wage growth continues at its current pace, such as it is, there’s no earthly way to make that sustainable.  

--

These two stories next to each other do a nice job of encapsulating the dilemmas of folks trying to make college budgets sustainable.

One is about a consultant urging colleges to pare down their programmatic offerings, in order to attain greater operating efficiencies.  Each new program requires slicing the existing population thinner, and committing to running entire programs even as cohorts shrink with attrition.  The other is about colleges adding programs right and left in hopes of generating enrollment.

Those of us in the trenches know this dilemma well.  Growth requires taking risks, which involves suspending the focus on efficiency for a while.  (New programs almost never pay for themselves in the first year or two.) That can be a hard sell as money gets tighter.  But if you stagnate for too long, you won’t be able to cut your way out of decline.

--

Bryan Alexander does a nice job here of connecting the dots between OER and changes in commercial publisher behavior.

As regular readers know, I’m very much a fan of OER.  Some people aren’t, whether because of concerns around sustainability, quality, or faddishness.  What Alexander points out here, I think correctly, is that OER is helping to put pressure on commercial publishers, thereby helping both its fans and its detractors.  Cengage Unlimited, for example, is pretty clearly a response to OER; if open alternatives had not caught on, I doubt that the subscription model would have emerged. Having to compete with “free” is forcing publishers to rethink a pricing model almost as out-of-control as our own.  

OER isn’t the entire answer to college costs, heaven knows, but it may buy us some time to figure out more fundamental changes.  And it will do so in an ethical and aboveboard way. To the extent that improved access to books improves student performance -- which it does -- colleges can do well financially by doing good morally.  That doesn’t always happen. When we find opportunities like those, we should jump on them. If it buys us time to address the larger cost issues, even better.

--

Of course, the ultimate in unsustainability is childhood.  This year, The Boy will be a senior in high school. He’ll be heading out in just over a year.

He’s fine with it.  Heck, he’s excited about it.

(pause)

I will be.  

Really.

(pause)

No, really.  I will.  I just...need a minute...

Wednesday, August 15, 2018

Ask the Administrator: Trying to Decipher Course Equivalencies


A left coast correspondent writes:

So I’m now a division chair at a large community college in California.  One of the most surprising aspects of my job has been just how hard it is to evaluate course equivalencies.  Oftentimes, students come to us with syllabi that don’t contain enough information for our college to determine equivalent courses.  Our Admissions and Records department handles standard cases, but anything tricky or incomplete gets sent to me.

Tracking down information from colleges is often challenging.  Faculty coordinators and department chairs are often unresponsive.  Maybe it’s because it’s summer (right when a lot of students are trying to get enrolled, but also when a lot of faculty at other colleges are on break…), but I have a lot of unreturned emails and phone calls.  Some colleges do not have a department admin support person who can answer questions. And don’t get me started on how hard it can be to figure out who to call based on college websites.

Is there a better way than what our college does to track down info to evaluate course equivalencies?  Students lose time waiting for us to figure out what we can give them credit for.


Been there.  And yes, it can be hard to get hold of people quickly in August.

The point about lost time is crucial, and often overlooked.  I’ve heard of colleges -- not naming any names -- that won’t reveal which courses a student will get credit for until after the student has committed to enroll there.  From a student perspective, that’s cheating. In the colleges’ (limited) defense, though, some decisions require more research than others.

Some states handle it by going with common course numbering across institutions, at least on the public side.  In my own career, I’ve seen “composition 1” designated English 108, English 101, and English 121. Your guess as to why is as good as mine.  And that’s with a pretty standard gen ed course; it’s often much more idiosyncratic when you get to more specialized material.

(When I was in Massachusetts, the state did a sort of double-entry common course numbering.  There was a master list of courses at the state level, and the registrar from each college had to indicate what the local numbers equated to on the state list.  That way, the local campus could have its own quirky numbering, but each college had a sort of secret decoder ring. It worked, in its way.)
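As a rough illustration of how that “secret decoder ring” works, here’s a minimal sketch of a crosswalk table.  The campus names, course numbers, and state IDs below are hypothetical, invented for the example; the idea is just that two local courses count as equivalent when they map to the same entry on the state master list.

```python
# Hypothetical crosswalk: (campus, local course number) -> state master-list ID.
# All names and numbers here are invented for illustration.
LOCAL_TO_STATE = {
    ("Campus A", "ENG 108"): "STATE-COMP-1",
    ("Campus B", "ENG 101"): "STATE-COMP-1",
    ("Campus C", "ENG 121"): "STATE-COMP-1",
    ("Campus A", "MTH 122"): "STATE-ALG-1",
}

def equivalent(campus_1, course_1, campus_2, course_2):
    """Two local courses match if they map to the same state master-list ID."""
    state_1 = LOCAL_TO_STATE.get((campus_1, course_1))
    state_2 = LOCAL_TO_STATE.get((campus_2, course_2))
    return state_1 is not None and state_1 == state_2

print(equivalent("Campus A", "ENG 108", "Campus B", "ENG 101"))  # True
print(equivalent("Campus A", "ENG 108", "Campus A", "MTH 122"))  # False
```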

In the absence of common course numbers -- whether public or secret -- colleges usually develop working lists from their most frequent sending schools.  The “greatest hits of transfer” may take a little while to compile, but once you have them, they can save a great deal of time and ensure consistency. For example, at Brookdale, we get a lot of students transferring in (laterally) from Ocean, Middlesex, and Mercer county colleges.  The registrar’s office knows how to read those transcripts. But we don’t get many from, say, Sinclair Community College in Ohio. A Sinclair transcript would have to be read individually.

When a registrar’s office gets a course it can’t decipher, the usual protocol is to refer it to the academic department or division.  The idea is that subject-matter experts would know what they’re looking at. Which is true, if they know where to look.

Ideally, you’d receive a syllabus from the course in question.  The syllabus should include the student learning outcomes for the course.  If you can match the SLO’s from the course coming in to the SLO’s from something on your campus, you’re good to go.  If you’re fluent in SLO-speak, you’ll even look at the verbs used and cross-reference them to Bloom’s taxonomy. (Lots of “identify” and “summarize” would indicate a lower-level class; more “synthesize” and “theorize” would suggest a higher-level one.)  Many colleges include SLO’s somewhere either in the online catalog or elsewhere on their websites. I’d start with those.
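As a crude sketch of that verb cross-referencing, here’s one way to tally SLO verbs by approximate Bloom’s level.  The verb lists and the three-level grouping are illustrative assumptions, not an official taxonomy mapping, and no script substitutes for a faculty member actually reading the syllabus.

```python
import re

# Illustrative (and far from exhaustive) verbs grouped by rough Bloom's level.
BLOOM_VERBS = {
    1: {"identify", "define", "list", "summarize", "describe"},       # lower-order
    2: {"apply", "compare", "analyze", "interpret", "explain"},       # middle
    3: {"synthesize", "evaluate", "theorize", "design", "critique"},  # higher-order
}

def estimate_level(slo_text):
    """Average the Bloom's levels of any recognized verbs in the SLO text."""
    words = re.findall(r"[a-z]+", slo_text.lower())
    hits = [level for word in words
            for level, verbs in BLOOM_VERBS.items() if word in verbs]
    return sum(hits) / len(hits) if hits else None

slos = ("Identify major theories.  Summarize key findings.  "
        "Synthesize competing explanations into an original argument.")
print(estimate_level(slos))  # nearer 1 suggests an intro course; nearer 3, an advanced one
```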

If you can’t track those down, samples of student assignments could also be useful.  The trick here is looking less at the student work than at the assignments themselves.  At what level were they pitched?

Some colleges also give the option of “area credit.”  That’s credit that isn’t as specific as the replacement of a particular course, but that’s more specific than the dreaded “free elective.”  For example, I’ve seen “transfer - humanities” for Portuguese 101-102 when the receiving school didn’t have courses in Portuguese. Languages aren’t interchangeable, so calling it “French” would be misleading, but it’s also tough to argue with a straight face that French is worthy of academic credit and Portuguese isn’t.  “Area credit” can give you a place to put work that’s obviously substantive and relevant, but that doesn’t quite fit an existing course on the books.

If none of those is available, you could always try to talk to the student to get samples of graded work.  It’s slow, but it’s a good-faith effort, and it will show that you’re trying.

Good luck!

Wise and worldly readers, I’m sure I’ve missed some methods.  What would you suggest?

Have a question?  Ask the Administrator at deandad (at) gmail (dot) com.

Tuesday, August 14, 2018

Place and Climate


In college, entirely by accident, I discovered that the laundry room in my dorm made an excellent study space.  Something about the white noise of the dryers provided just enough external stimulation to allow my restless self to focus, without getting distracted.  I carried that into grad school, where the local laundromat became a favorite study spot. Something about it just worked.

A good study space is a real find.  I always had trouble studying in my dorm, mostly because it always seemed like I should have been hanging out with friends instead.  The library could be a great spot, or not, depending on my mood and the time of day. I even found a few nooks and crannies around campus that did the trick.  

The key wasn’t so much the presence or absence of other people as it was the overall feel.  It had to feel distant enough from the dorm that the distractions of the dorm weren’t there to compete for attention, but still familiar and safe enough that I could relax.  

I thought of that in reading the latest report about student success and community college libraries.  It speaks not just, or primarily, to the published or electronic resources that libraries offer, as important as those are, but to libraries as spaces.  For students who don’t have great alternatives for study spaces -- laundromats notwithstanding -- libraries are lifelines.

At my previous college, my one architectural contribution was getting the library to set aside one room -- previously used for periodicals -- as a quiet study space.  It attracted a small, but consistent and self-enforcing, clientele. For all of the tech toys out there, sometimes students just need a quiet space with a clear purpose.  At my current college, we were able to leverage donor money to renovate dozens of small study rooms around the library. At the end of each semester, they’re so popular that the library has had to devise a signup system.

Quiet is part of it, to be sure.  But I was taken by this line from a student:

“I’m a procrastinator so I need to be in a public space where other people are doing work as well.  It really helps me focus in on what I have to do and it feels like that’s the rhythm that everybody already is in in this space and so it’s easier for me to concentrate in those spaces.”

That climate is hard to replicate online, particularly when students don’t have quiet spaces at home.

On a commuter campus, as opposed to a residential one, libraries become that much more important.  On a cold or rainy day, the library is often one of the few places that students can go between classes, along with the cafeteria.  Typically, the cafeteria is a social space, and rightly so. The library can be the one place that allows students to focus.

Wise and worldly readers, what were your favorite study spaces in college?  Have you seen a campus develop a really successful one?