(Filed from the conference of The League for Innovation in the Community College, in a surprisingly chilly San Diego.)
I missed last year’s conference of the League for Innovation, so with a little distance, I’ve noticed some changes. The most striking is the sudden omnipresence of for-profits. The lanyards for the nametags bear the logo of Capella University. Terry O’Banion, the former President of the League, did plugs for Walden U and Capella U during his talk. Panels listed as if they were the fruits of research turn out to be designed to hawk software from a given vendor, which is the presenter’s “secondary” affiliation. It used to be that open commercialism was mostly limited to the exhibition hall, where it was expected; now it’s becoming the content of presentations.
I’ve also been struck by the virtual disappearance of the phrase “community college movement.” In years past, I heard that a lot. This year, the only people who’ve used the phrase have been folks with “emeritus” in their titles. During the keynote, by Allan Golston of the Gates Foundation, I heard instead discussion of a “reform movement” for community colleges. The distinction is generational, and it gives me hope.
Anyway, a few observations thus far:
- At a wonderfully informative panel geared towards people applying for community college presidencies, I noticed that the audience was more than two-to-one female. That’s consistent with what I’ve seen at statewide meetings in my own state at every level below the presidency. Change is coming, and it’s coming soon.
- Sometimes it’s the little things that reveal a lot. Both of the speakers at the “future presidents” panel referred to “applying for your first presidency,” as if a second and third are simply assumed. It was a little jarring. One speaker recommended targeting a single campus of a multi-campus system for a first presidency, since the chancellor offers a buffer with the Board. Kind of a “starter” presidency. It had never even occurred to me to think of that.
- Judging by the discussion, the quality of Boards of Trustees varies pretty widely across the country. I can honestly say that I’m lucky on that count.
- Terry O’Banion spoke of the “New Normal” for community colleges. He got off one great line -- “we accept the top 100 percent of our applicants” -- but otherwise did a fairly predictable discussion of declining funding, rising enrollments, and the rise of the for-profits. I was a little taken aback when he used statistics about a forthcoming great wave of retirements from college administrations -- I could swear I’ve heard that phrase somewhere before -- to plug the for-profit educational leadership graduate program he runs. So it goes.
- The highlight thus far was a panel on developmental education. It featured two women from the Community College Research Center at Columbia, and one woman from the math department at Los Medanos College in California. Nikki Edgecombe, from the CCRC, is clearly a rising star; expect to hear from her in the future. She mentioned a 2009 CCRC study of community colleges in Virginia showing that students who managed to evade developmental classes, even though they had “placed” into them, did just as well in college-level courses as students who actually did what they were told. She also pointed out that failing classes and dropping out are two different things, and that national data suggest that a surprising number of students drop out after passing developmental classes.
- Prof. Myra Snell, from Los Medanos, coined a wonderful word: “stupiphany.” She defined it as that sudden realization that you were an idiot for not knowing something before. The major “stupiphany” she offered was the realization that the primary driver of student attrition in math sequences isn’t any one class; it’s the length of the sequence. Each additional class provides a new exit point; if you want to reduce the number who leave, you need to reduce the number of exit points. If you assume three levels of remediation (fairly standard) and one college-level math class, and you assume a seventy percent pass rate at each level (which would be superhuman for the first level of developmental, but never mind that), then about 24 percent will eventually make it through the first college-level class. Reduce the sequence by one course, and 34 percent will. Accordingly, she’s working on “just in time” remediation in the context of a college-level course. There is definitely something to this.
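Snell’s arithmetic is easy to check. Here’s a quick back-of-the-envelope sketch (mine, not from the talk) that treats each course in the sequence as an independent hurdle with an assumed uniform 70 percent pass rate:

```python
def completion_rate(levels_of_remediation, pass_rate=0.70):
    """Fraction of students who finish every remedial level plus
    one college-level math course, assuming an identical, independent
    pass rate at each step (a deliberate simplification)."""
    courses = levels_of_remediation + 1  # remediation plus the college-level class
    return pass_rate ** courses

print(round(completion_rate(3), 2))  # three remedial levels -> 0.24
print(round(completion_rate(2), 2))  # drop one course from the sequence -> 0.34
```

The simplification is obviously heroic -- real pass rates vary by level, and as Edgecombe noted, students also leave after passing -- but it makes the exit-point logic concrete: each additional course multiplies the completion rate by the pass rate, so shortening the sequence helps more than tinkering with any single class.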
- The keynote was a talk-show style interview by Mark Milliron of Allan Golston. They’re both working with the Gates Foundation. The audio was godawful -- it sounded like the old 16 mm movies we used to see in elementary school when the sprockets skipped -- but from what I could actually hear, the content was heartening. Golston noted that developmental ed is the “Bermuda Triangle of higher education,” especially for adult students, for whom “time is the enemy.” He (correctly) noted the irrationality of measuring learning in units of seat time, calling for diagnostic exams that are diagnostic enough to allow students to take only what they actually need. The highlight for me, though, was at the end, when he addressed the need to be “radically practical.” In that context, he noted that a failure to address cost when discussing possible improvements actually does real harm.
Past conferences have featured more nostalgia, more swing-for-the-cheap-seats idealism, and more clubbiness among members of the founding generation. This one is notably more focused on cost and practicality. It seems more lightly attended -- the keynote audience was more than half empty -- but the questions have been far more relevant and useful. On to day two!
Monday, February 28, 2011
Thursday, February 24, 2011
Supplanting and Budget Cuts
(Or, why I can send an English professor to a math conference, but I can't send a math professor to a math conference.)
Earlier this week, I mentioned an interaction with the campus Money Guy that strained the limits of absurdity. In fairness, I should mention that sometimes I have to be the apparatchik, too.
Federal (and many other) grants typically come packaged with rules against “supplanting” college resources with grant resources. In plain English, that means that we can't use grant money to pay for things that the college would have paid for otherwise. The idea is to prevent colleges from inventing projects, and getting grant money for them, just to fund basic operations.
From the funding agency's perspective, this rule generally makes sense. Grants are usually intended to enable or foster something new, or at least something narrowly targeted. Generally, they aren't intended to be 'slush funds' to be used in whatever way the recipient sees fit. In extreme cases, that's essentially what open-ended supplanting would amount to. The agencies want to prevent a college from being able to say “thanks for the five million for the tutoring project, but we've decided to use it to offset state cuts instead.” He who pays the piper calls the tune; it's unsurprising that people or agencies who hand over significant cash want to maintain some level of control over it. If a college finds the strings attached to a given grant too onerous, it can simply choose not to apply. (We've actually done that.)
In isolation, that makes sense.
One of the tests often used to determine whether a given expense amounts to “supplanting” is whether the college has covered that expense before.
When you have reasonably steady-state (or growing) budgets, that's a pretty fair test. If the college paid for several faculty to attend, say, the CCCC conference (for teachers of composition) each of the last several years, then it would be hard-pressed to argue that using grant money to cover that cost this year isn't supplanting. It's hard to say that you wouldn't have done something this year that you've done routinely for the last several.
But when operating budgets are dropping fast, the 'supplanting' rule becomes a lot murkier.
Given rapidly dropping state aid and a host of fixed (or growing) costs, things like “travel” tend to be the first to be cut. That's not because we don't see the value of travel; it's just that the money isn't already committed, the way that money for salaries and physical plant is. Cutting a salary means losing a person; cutting travel just means cutting travel. Neither is great, but when it's one or the other, travel tends to lose.
After a couple of years, it's entirely possible to land in a spot wherein an otherwise intelligent administrator has to deny funding for the English professor to attend the writing conference, since that has to come out of college money that has been cut, while saying yes to the math professor who wants to attend the writing conference, since the grant can cover that. You say 'no' to the obviously valid proposal, and say 'yes' to the one that's a bit of a stretch, simply because the former had traditionally been covered internally and the latter hadn't. The rule against supplanting interacts with the operating budget cuts in ways that nobody intended, but that would be incredibly hard to prevent.
I'd love to be able to convince the Feds that historical patterns become irrelevant when operating cuts hit a certain level. But it’s tough to prove that absent the grant, you would have cut that particular item anyway. I can argue until I’m blue in the face that the travel budget from two years ago is irrelevant now, since we’ve sustained so many cuts since then, but the distinction between “replacing” cuts and “supplanting” them is awfully hard to draw.
Wise and worldly readers, has anyone found a clever (and legal!) way around this?
Wednesday, February 23, 2011
Trust Us, We're Experts
Historiann has a fascinating, and I think largely representative, take on a provocative article in the Washington Post about “fixing” higher education. The original piece outlines eight steps that it argues would make meaningful differences for colleges and universities in the US. Some of them are easy and obvious, like toning down the focus on athletics; others are deeply problematic, like junking merit scholarships. (For my money, there’s something fundamentally wrong when having a good jump shot is a surer ticket to tuition than building a strong record at chemistry or writing.)
The first one is somewhere in between. It’s “measure student learning.” Historiann dismisses this one out of hand, with a quick reference to No Child Left Behind and the following: “Let’s just strangle this one in its crib unless and until we get some evidence that more testing = more education.”
It’s a fascinating response, because it encapsulates so cleanly the unthought impulse that many of us have. Testing equals Republicans equals bullshit; now shut the hell up and write us large checks. Trust us, we’re experts.
It’s written a little more carefully than that, of course, but written specifically to defeat verification. It rejects any sort of “measurement,” but does so by calling for “evidence” that measurement works.
What would that evidence look like? Might it involve, say, measurement? If not, then on what basis could you use a term like “more”? Every meaning of “more” that I can fathom involves some sort of comparative measurement. But to do that, we’d have to agree on a measure. Unless, of course, that was simply a rhetorical flourish, a semi-ironic acknowledgement that such a thing could never be proven because, well, it just couldn’t.
The knee-jerk response to any sort of accountability rests on a tautology. We know better than anyone else because we’re experts; we’re experts because we know better than anyone else. Screw measurement, accountability, or assessment; we already know we’re the best. Just ask us! Now, about that check...
If the folks who care about higher education are even halfway serious about avoiding the traps K-12 is in, the first step is not repeating the same mistakes. “Trust us, we’re experts” simply is not a persuasive argument to the larger public. It may once have been, but it isn’t now, and it hasn’t been for a long time. The difference between Historiann’s perspective and my own is that she seems to assume that failure to defer to rank is the public’s shortcoming; I think it’s basically healthy.
Part of the reason that Academically Adrift has resonated as much as it has, I suspect, is that it argues something that most of us (and most of the taxpaying public) secretly know to be true: many college students skate through without getting appreciably smarter. I consider that a major problem, and one that would require some pretty fundamental structural changes to higher education to address.
Oddly, many of the same people who share Historiann’s dismissal of testing are among the first to decry poor student performance. We expert educators are expert educators, if we don’t mind saying so; therefore, any student failings must...wait for it...be the fault of the students! In fact, they’re getting worse all the time! Now, let’s talk about next year’s tuition increase...
After a few decades of that, the public is getting a bit, well, testy. And well it should.
At base, the popular perception that college is a scam can’t be ameliorated by assertions of expertise, truth, and virtue. If those worked, they would have worked by now. It will be ameliorated, or not, by showing the public some kind of real results. What those results should be is certainly open for debate; as a kid, I remember seeing the space program justified by the development of calculators and digital watches. It might take the form of some sort of exam, or it might take the form of success stories, or it might take the form of new graduates developing wonderful things. Which path to pursue strikes me as a fair and valid discussion. But if we don’t recognize that the basic impulse behind the testiness is essentially valid, we won’t get anywhere. Aristocratic pretensions aren’t gonna cut it; the “appeal to authority” isn’t terribly appealing. We need to show, rather than tell, the public that we’re worth supporting. Which means we need to show ourselves first. Strangling that impulse in the crib is not a serious answer.
Tuesday, February 22, 2011
Curriculum Above and Below
The outside world takes it for granted that colleges, particularly community colleges, should develop curricula to match the needs of employers.
The higher ed world takes it for granted that curriculum belongs to the faculty.
Deans are in the delightful position of trying to navigate between those two. The frustrating truth is that they’re both partly right, but both lean toward absolutism.
I’ve had plenty of discussions with employers over the years in which they’ve asserted with great confidence that they know precisely what they want. But when pressed, a couple of issues emerge. First, to the extent that they know what they want, they know what they want right now; a year from now is anybody’s guess. When programs take two years, that’s not a trivial distinction. Second, I’ve had to learn to ask the “how many” question early on. I’ve had employers tell me, in all apparent seriousness, that they absolutely, positively need people with skill set x. When I ask how many people they need, the hemming and hawing starts; in one memorable conversation, the answer was two. No, I will not start a program for two jobs. It will not happen, and it would be an abuse of taxpayer dollars if I did.
I actually had better discussions with employers when I was at Proprietary U, since they felt like they were on home turf and could let their guard down. There, they typically indicated that as long as students had a basic set of technical skills, what separated one student from another was the soft skills. I sat through many a program review in which the technical program deans seethed at me as the discussion went from their bailiwick to mine. The take-home lesson from that, for me, was that there’s a difference between the “foot in the door” skills and the “promotion and career” skills. Those who were merely trained may get the foot in the door quickly, if they were trained in the right thing at the right time, but they won’t last long and they won’t get promoted. Moving from working the help desk to managing the help desk requires the soft skills that real education can help develop.
The catch, of course, is that when you’re unemployed and desperate, all that long-term stuff is very much the kind of thing you will get to later. You need an income, and you need it now.
The grant-funded workforce development programs tend to focus on the quick hits. They want short-term programs -- nothing more than a year, and ideally much less than that -- that will get someone a foot in the door. There’s a perfectly valid reason for that, and I have no issue with it, as far as it goes. In my perfect world, the quick hit would get the student into a job post-haste, and the student would use the income from the job to support herself while she continued towards a real degree. Put out the fire, then rebuild the house. Sometimes that even happens, and I salute the folks with the tenacity to make it through that way.
The catch is that faculty, who own the curricular development say-so through the governance process, focus almost entirely on degree programs. They don’t want to ‘train,’ and they’ll use the term disparagingly. They want to educate, and they want the full two years (or, in practice, more) to do it.
That makes sense on its own terms. Given the choice, would you rather produce worker bees or the next generation of leaders? Given the choice, would you restrict yourself to teaching “how to” or add a layer of “why, and how do we know?” If you take the “college” part of “community college” seriously -- and I hope that every professor on campus does -- then of course you’d want to focus on degrees that actually mean something.
But not every student can take two or three years before making money. Some never will, and some will get around to it later after they’ve taken care of business. Basing everything on the assumed ideal of the first-time, full-time, degree-seeking student -- the IPEDS cohort -- is easier, but it doesn’t address the daily reality of the lives of most of the students who come here.
In the worst cases, which I’ve seen happen, some upper level of government -- either state or federal -- comes in with a semi-mandate to produce students in (whatever). The curriculum committee objects, largely out of resentment of encroachment on its territory. The initiative either dies in committee or escapes with minimal support, only to die on the vine shortly thereafter. New programs typically get through the curriculum committee only when someone on the faculty is willing to champion them. When a program is entirely new to a college and pushed from outside, there may not be a champion present, even if, objectively speaking, there should be. It dies for lack of a champion, and there’s no reason to hire someone to be the champion in the absence of a program. There’s a chicken-and-egg quality to the dilemma. That’s why new programs tend to be offshoots of existing ones; existing ones actually have people on staff.
I can see a few ways to square the circle, but they tend to apply only in special cases.
One is when an administration is willing to hire in anticipation of a program being approved. In this fiscal climate, I consign this to “purple unicorn” territory, but it’s theoretically possible.
Another is when the local faculty is willing to champion something not its own. This does happen, from time to time, and it’s wonderful when it works. You just can’t count on it too often, and certainly not on the timelines that granting agencies tend to prefer.
Alternately, the grant could assume the cost of the professor(s). The catch here is the tenure clock. When the grant expires, the professors are either tenured or close to it. Unless the grants can be permanent -- a variation on endowed chairs -- this has obvious limits.
Assuming you can somehow square the circle, the most promising programs I’ve seen are structured as “career ladders,” in which various stop-out points with intermediate credentials are built into the degree path. A student goes full-time for a semester or maybe two, and earns a credential good enough to get something above minimum wage. She then shifts to part-time status, and completes the degree while working. It’s hell on our time-to-completion stats, but it’s the right thing to do.
It would be awfully nice if granting agencies structured their programs with some recognition of the reality of shared governance. Anytime they’d like to start, I’d be happy to assist.
Wise and worldly readers, has your campus found a reliable and elegant way to address the valid concerns of both external agencies and faculty leaders?
Another is when the local faculty is willing to champion something not its own. This does happen, from time to time, and it’s wonderful when it works. You just can’t count on it too often, and certainly not on the timelines that granting agencies tend to prefer.
Alternately, the grant could assume the cost of the professor(s). The catch here is the tenure clock. When the grant expires, the professors are either tenured or close to it. Unless the grants can be permanent -- a variation on endowed chairs -- this has obvious limits.
Assuming you can somehow square the circle, the most promising programs I’ve seen are structured as “career ladders,” in which various stop-out points with intermediate credentials are built in to the degree path. A student goes full-time for a semester or maybe two, and earns a credential good enough to get something above minimum wage. She then shifts to part-time status, and completes the degree while working. It’s hell on our time-to-completion stats, but it’s the right thing to do.
It would be awfully nice if granting agencies structured their programs with some recognition of the reality of shared governance. Anytime they’d like to start, I’d be happy to assist.
Wise and worldly readers, has your campus found a reliable and elegant way to address the valid concerns of both external agencies and faculty leaders?
Monday, February 21, 2011
When Accountants Attack
This actually happened.
Probably due to something in the water, we’ve had an outbreak of pregnancies on campus over the past year. In every case, we’ve had to pay replacements to pick up either the classes or the hours of the woman who went out on leave. There’s a budget line for substitutes, but we’ve already blown well past it for the academic year, and it’s only February.
The college budget hawk, whom I will simply call Money Guy (MG), dropped by my office to express his concern. This is the actual, I-am-not-making-this-up conversation.
MG: DD, you’ve gone well over the allotment for the substitutes line.
DD: That’s true.
MG: What happened?
DD: We had an outbreak of pregnancies, so we’re covering for several maternity leaves.
MG: Well, you’ll need to keep an eye on that.
(pause)
DD: Keep an eye on that?
MG: Right. We can’t keep overspending the lines.
DD: MG, they’re pregnant. What, exactly, do you propose I do about that?
MG: Well, we need to exercise fiscal discipline.
DD: Fiscal discipline? They’re pregnant. What am I supposed to do about that?
MG: (silence)
DD: I didn’t get any of them pregnant. Beyond that, I’m really not sure what you’re asking me to do. Should I send out a memo asking everyone to knock it off?
(pause)
MG: Well, I guess we’ll take it from somewhere else...
The joys of bureaucracy...
Friday, February 18, 2011
The Cheese Stands Alone
Fresh off a glorious Super Bowl victory, the state of Wisconsin is apparently looking at rescinding collective bargaining rights for college and university faculty and professional staff. (At this point, only the Democrats’ hiding ability seems to be stopping it. There’s a metaphor in there somewhere...) For blue-collar workers, as I understand it, it’s looking at restricting the range of collective bargaining to base wages; benefits, working conditions, workloads, and procedures would be off the table.
The move is being presented as budget-driven. The state is facing a serious deficit, and it’s easier to cut labor costs when you don’t have a union. But the real motive is pretty clearly ideological. You don’t need to go nuclear to balance the budget.
Unions can be a pain in the ass, but they can also enable predictability and stability across a large organization (or group of organizations). As painful as contract negotiations can be, they at least cover a boatload of people in one shot; the alternative -- negotiating each contract individually -- can be horribly time-consuming and prone to anomalies. When a union has thoughtful leadership, as opposed to firebrand true believer types, it can be a valuable partner in problem-solving. Having worked with unions in different states, I can attest that a union with smart leadership can actually save a lot of time and effort. (Of course, I can also attest that the wrong leadership will substitute heat for light.)
For my part, I’d far rather work with a unionized faculty than with a tenured one. (As it is, I work with both.) Unions work by the logic of reciprocity, as encoded in contracts. Contract law is well-established. Going to the trouble of establishing procedures upfront, as painful as it can be, can save all kinds of legal trouble later. For that matter, collectively negotiated salary or benefit cuts are less damaging to morale than cuts made on a case-by-case basis. It’s one thing to take one for the team; it’s something else to take a deeper cut so somebody else’s favorite can go unscathed.
The kernel of truth in the attack on unions is that as contracts mature, there’s a tendency for them to try to micromanage from below. Rules like “no staff meetings on Mondays” get put in there because it’s someone’s hobbyhorse and a deadline is approaching; before you know it, you’re stuck with it. Every so often, it’s important to blow out the plaque so actual work can get done. Managers need to be able to manage without being subjected to incessant hostile surveillance and a flurry of frivolous grievances. But what Wisconsin is doing amounts to firing a bazooka at a mosquito. Houses tend to fall when you do that sort of thing.
If Wisconsin were actually primarily concerned with finances, rather than scoring ideological points, it could bring those issues to the table. Certainly, there are times when that has to happen. That’s particularly true with health insurance costs, which continue to grow beyond any reasonable measure, and which are devastating in any labor-intensive enterprise. I don’t disagree with the idea that employees should kick in something for health insurance; I just think the way to get there is to negotiate it. (For what it’s worth, I pay a larger percentage of my salary for my pension and health insurance than the folks in Wisconsin would pay after the governor’s proposal. Some sense of scale in the discussion would be welcome.)
Over the long term, I’m convinced that employer-provided health insurance and pensions are simply unsustainable. The combined cost escalation is such that no-win dilemmas will continue to proliferate until something just breaks. Fighting site-by-site just obscures the larger problem and the obvious larger solution, which is single-payer. Attacking the very movement most likely to get us to the obvious solution is self-defeating. Of course, it’s clear that the governor of Wisconsin knows that, as does the Republican majority in the legislature. From their perspective, the obvious solution is out of the question, so attacking unions is a win-win. That’s why, despite the frustrations I deal with on a daily basis, I have to go with the unions on this one.
Here’s hoping the Democrats find their spines before the troopers find the Democrats...
Thursday, February 17, 2011
Acronym Soup
This confession is really awful for an academic administrator, but it’s true. My brain has run out of space for new acronyms.
Acronym proliferation is out of control. It comes from many sources.
The most obvious is grant-funded programs. For whatever reason, a few decades ago someone decided that every grant-funded program needed a clever, upbeat acronym. As with many awful ideas, it was probably harmless enough at first. But the good ones went fast, and now each new iteration of a program needs its own spiffy new term.
Soon the state- and federally-funded programs followed suit. Now even local initiatives have to have acronyms.
The same letters tend to pop up in acronyms a lot. You don't see many x's, z's, or f's. In the world of electronics, for whatever reason, every acronym has to have the letters c, e, and t. STEM has become an acronym without portfolio, taking on a life of its own as a generic term for science and math. One of my prouder moments as an administrator came last year when I noticed that a particular program was in danger of adopting an acronym that, when pronounced, suggested an unusual sex act. The components of the name were quickly and discreetly rearranged.
Some parts of the college tend to be more acronym-happy than others. Nursing, Teacher Ed, Adult Basic Ed, and Workforce Development tend to be the most prolific generators. The first two are heavily licensed and credentialed, which means you have agencies with acronyms generating programs with acronyms. The latter two are grant-heavy, which means there’s just no escaping it.
Over time, it’s hard to keep all the clever little names straight. They start to blend together. In the right context, too, you’ll sometimes hear three or four of them in the same sentence. Last week I endured one that was structured as follows: “The abab folks are collaborating with the cdcd project over at efef, in hopes of procuring a ghgh grant from ijij.” That’s just a crime against language.
Wise and worldly readers, what’s the worst acronym offense you’ve seen lately?
Wednesday, February 16, 2011
Sinecures and Sunsets
Too many management books are written from the perspective of the CEO. Most managers aren't CEOs; they're somewhere in the middle, trying to negotiate between directives from above and facts on the ground below. Reading about Steve Jobs can be fun, but if you're a regional sales manager, it's of limited use. He has room to move that you simply don't.
The same flaw bedevils much of academic culture in discussions of academic administration. “The Administration” is characterized as an eternal monolith, as if everyone in it is part of the hive mind. But that’s simply not true. People come and go, and that necessarily means that they ‘inherit’ arrangements made by predecessors. Sometimes those inheritances are great, and sometimes they’re not. When they’re not, addressing them can be hellaciously difficult.
In a mature organization, you’ll inevitably find a few very comfortable niches that some difficult people have carved for themselves. Typically, someone years ago (and long gone) decided that it was easier to buy peace than it was to continue to fight the same battle, so they bought off a prima donna with some ill-defined sinecure. It solved the immediate problem, but was never really rational from an organizational level. Now, many years later, new administrators are facing much tighter budgets, and that sinecure is starting to look hard to justify.
Good management practice says that you define the desired outcomes before you establish something, and you set out the criteria for evaluating success (or a time-defined sunset clause) in advance. Then, at some reasonable moment, you measure the outcomes against the criteria and decide whether to expand, continue, shrink, or kill it. But the new manager who inherits a sinecure doesn’t have the option of going back in time and doing that. The murky mission has become a part of the organization, with various people filling the explanatory vacuum with reasons that serve purposes of their own. You start to hear phrases like “paid my dues,” “past practice,” and “commitment to...” The arguments for its continued existence hearken back to circumstances from decades past, recalled with frustrating inconsistency.
If you take it on anyway, you get hit with “The Administration is at fault for not defining this upfront.” That would be mildly compelling if The Administration were actually continuous. But the logic of that argument suggests that a mistake made three predecessors ago must stand for all time. It doesn’t make sense. Yes, it would have been better if the sinecure had come with a sunset clause, but it didn’t.
When budgets are relatively flush, these issues aren’t so difficult. You can replace one boondoggle with another, but define the new one more intelligently. Alternately, you can offer buyouts. And sometimes you get lucky and get retirements at the right times. But when budgets are being cut and the retirements don’t happen at the right moments, buying your way out of the problem just isn’t an option.
Wise and worldly readers, there’s an awful gap in the literature that needs to be filled. Have you seen an effective way for someone who inherits a sinecure to bring the sunset?
Tuesday, February 15, 2011
From K-12 On Up...
The Boy and The Girl attend a pretty good public school district. It’s in a working/middle class suburb, and it punches slightly above its socioeconomic weight in test scores. But it’s hardly rich, and it’s not immune to the recession.
Last week the superintendent mentioned at a public meeting (that The Wife attended) that with federal stimulus funds expiring, the district faces a deficit of unprecedented size. She outlined a series of user fees and layoffs that, taken together, might just barely get the job done if things don’t get any worse.
TW came home from that meeting and showed me the documents the superintendent had distributed. I had hoped that with my extensive experience working with crappy budgets in public education, I’d have something useful to contribute. In thinking through the moves we typically make at the college to deal with budget cuts, though, I realized that the K-12 system is fundamentally different. Some elements of the usual higher ed playbook don’t apply.
- Adjuncts. Higher ed routinely balances budgets by using adjuncts. This model doesn’t apply to K-12 in the systems I’ve seen. Yes, they sometimes split an art teacher between two schools, but that’s hardly the same thing.
- Tuition increases. K-12 can get away with user fees for a few things -- sports, clubs, maybe even buses -- but public education does not charge tuition. (Yes, there are exceptions for out-of-district students, but the numbers of those here are negligible.) This means that K-12 districts can’t try to grow their way out of budget troubles. New students bring new costs, but don’t bring corresponding new revenues.
- Cutting sports teams. Politically, that’s much easier at a cc than at a public high school.
- Transportation. We charge for parking; they pay for buses.
- Contract training. We make money on certain workforce development contracts with local companies, in which they pay us for classes for their employees. The profits go into the traditional instructional budget. K-12 doesn’t have that option.
Generally, the K-12 system doesn’t have as much leverage on the “revenue” side, so it has to work more on the “cost” side.
Some cuts are easier to tolerate than others. One of the best uses we made of stimulus funding was to purchase more energy-efficient equipment across campus; the resulting lower utility bills function as budget cuts, but they don’t hurt. Now the stimulus funding is going away, but we can sustain the energy savings going forward. Unfortunately, the K-12 district used the stimulus funding mostly for operating expenses, so with the funding drying up, they’re marooned.
(I’ll admit being surprised at the percentage of their budget that goes to Special Education. We have an Office for Students with Disabilities, which is large and expanding, but the percentage doesn’t come close to what Special Ed costs. I don’t know exactly what there is to be done about that, but the difference was striking.)
Some local districts have outsourced their AP classes to local community colleges, opting for “dual enrollment” courses instead. The students pay the cc tuition and the high school awards dual credit without having to pay a teacher. That can help on the margins, but it takes a while to establish and doesn’t add up to very much. I’m also not sure how the selective colleges that AP students often target would value dual enrollment classes; any readers with direct knowledge of that are invited to comment.
The district is looking at secretaries, assistant coaches, teacher’s aides, and a couple of freshman teams. It’s also looking at fees for sports, clubs, and parking at the high school. There’s a short-term logic to that. Most of those are the variety of cuts whose damage shows up over time, rather than all at once. As we’ve found on my own campus, when you thin out your administration, some things just don’t get done. Over time, those things add up.
Of course, at some level this all involves denial of the basic truth of a catastrophic upward redistribution of income that leads inexorably to straitened resources for public goods. But saying that doesn’t help solve the problem for July. It just helps me cope when I remember that for all the infighting and awful choices, the real issue is a plutocracy that just keeps moving the goalposts, year after year after year.
Monday, February 14, 2011
Rethinking Skype Interviews
Last week’s piece in IHE by “Young Philosopher” about replacing first-round conference interviews with Skype interviews has stuck in my craw for the last few days. I’m increasingly convinced that he’s on to something, but with a few key qualifications.
(I have no ‘brand loyalty’ on this one. I’ll just refer to Skype because it’s convenient, but any synchronous, interactive web video platform would accomplish the same thing.)
Last Fall, in response to a reader question, I mentioned that I’d never seen a candidate who had only done Skype interviews actually “win” a faculty search. But I have to admit some second thoughts since then.
The hole in my logic was that I was assuming that ‘distance’ interviewees were necessarily competing with ‘in person’ interviewees. That has actually been the case on the ground thus far, but it doesn’t have to be. And thinking back, I’ve actually been interviewed by old-fashioned telephone in the first round a few times, and some of those resulted in in-person followups. If old-fashioned telephone works, I don’t know why Skype couldn’t.
The key is consistency.
Typically, faculty interviews occur in two rounds. The first round brings in 8-10 candidates, usually some local and some from a distance. That group gets winnowed down to (usually) three finalists who are invited back for a second round.
I’m still unwilling to give up on the second round being done in person. Especially for distance candidates, the opportunity to see the campus itself, to walk around it and get a flavor of the place, is crucial. I’ve had enough experiences of “this wasn’t what I expected” that I wouldn’t want to give up on that reality check.
But for the first round, I could imagine holding every interview via Skype, even the local ones. That way, every candidate is on a level playing field. We can save the expensive and time-consuming reality check for the finalists.
The major advantage of moving to Skype for first-round interviews is cost. Flying people in and putting them up in hotels is a serious cost. That’s especially true when flights have to be booked on short notice. For candidates juggling multiple interviews -- yes, it happens, even in this market -- the time commitments are substantial. But anyone who actually wants a job should be able to block out an hour at some point for a distance conversation; if they can’t even be bothered to do that, then I know what I need to know.
(This seems to be a difference from the situation YP describes, in which first round interviews are routinely conducted at a regional/national conference. For a host of reasons, including cost, scheduling conflicts, and differences among disciplines, we haven’t done that. YP’s proposal assumes poor graduate students trekking to a conference in hopes of getting interviewed; the cost savings from Skype would accrue to the graduate students. Here, we’ve always paid for candidates to come to us, so the savings would accrue to the college.)
Admittedly, it would be a little awkward to interview incumbent adjuncts or local candidates via Skype. But that seems like a small price to pay for consistency. Comparing an in-person candidate to a distance candidate introduces a glaring measurement error; if everyone is on a level playing field, then at least nobody is gaining an undue advantage.
I can imagine two potentially significant problems. The first is teaching demos. I’m not sure how well teaching demos would work over Skype. We’ve typically included teaching demos in the first round, since for a teaching institution there’s simply no point in putting a candidate forward who isn’t effective as a teacher. Virtual teaching demos could be pretty misleading, since they wouldn’t really approximate either a real classroom setting or an online class. There’s probably a way around this, but I haven’t seen it or figured it out.
The other major disadvantage that leaps to mind is the less-than-perfect reliability of internet video. Those of us old enough to remember Max Headroom have probably had the occasional flashback while trying to converse on Skype. Interviews can be relatively tense on a good day; throw in random technical glitches, and suddenly you’re basing decisions on the vagaries of bandwidth.
The second objection strikes me as mostly temporary, though, given the speed of improvement of these things. The first is tougher, but I wouldn’t be shocked to see someone figure out a reasonable way to work around it.
With travel costs continuing to climb and budget pressures continuing to mount, the logic of Skype-type interviews -- at least for the first round -- is becoming more persuasive. Wise and worldly readers, is there something I’m overlooking? Alternately, have you tried the all-Skype route on your campus?
Friday, February 11, 2011
Not Achieving the Dream
Achieving the Dream is an initiative sponsored by the Lumina Foundation and spearheaded by one of my personal heroes, Kay McClenney. It’s an attempt to get community colleges across the country to build ‘cultures of evidence’ about student success. It relies heavily on data-driven decisionmaking, with the goal of prodding colleges to move from the ways things have always been done to the ways that things actually succeed. It’s a great idea, and I’m a fan. (For the record, my college is not an ATD school.)
That said, though, I can’t say I’m shocked at this report. Apparently, a national study has found that colleges that have signed on to ATD have not seen statistically significant gains in any of the measures used to gauge success.
Although my college is not an ATD school, it is working diligently on a number of similar measures to improve student success rates. Here, too, the results so far have been disappointing. And we have one of the better Institutional Research offices around.
Assuming the presence of a strong IR staff, good Presidential support, thoughtfully constructed interventions, and broad agreement on the overall goal -- all of which are present here -- why aren’t we moving the needle?
I’ll answer the question with another question. Good, strong, solid, peer-reviewed scientific data has made it abundantly clear that poor eating habits lead to obesity and all manner of negative health outcomes. There’s no serious dispute that obesity is a major public health issue in the US. And yet people still overeat. Despite reams of publicity and even Presidential support for good eating and exercise habits, obesity continues to increase. Why?
Sometimes it’s more than a matter of knowing where the problem is.
For example, in the case of student success, there’s the fundamental problem of thin budgets. I’ve seen data suggesting that higher percentages of full-time faculty lead to better student outcomes, and I assume that there’s some truth to that. But we can afford only what we can afford. Knowing that a major increase in the instructional budget might help is of only theoretical interest when we’re taking year after year of operating budget cuts. We’ve shifted money around internally to keep the faculty numbers from slipping, but they haven’t grown, and enrollments have. (And the few remaining deans are stretched so thin that talk of quitting is becoming endemic.)
Thin budgets also manifest themselves in ‘boutique’ interventions that don’t scale up. On my own campus, we’ve had great results with several very labor-intensive programs: supplemental instruction, summer bridge programs, that sort of thing. They’re terrific for the handful of students who have access to them. But we have nothing close to the budget it would take to make those available to all, or even most, students. So we can get good percentage improvements in targeted areas, but the overall numbers don’t really move.
There’s also a fundamental issue of control. Faculty as a group are intensely protective of their absolute control of the classroom. Many hold on to the premodern notion of teaching as a craft, to be practiced and judged solely by members of the guild. As with the sabermetric revolution in baseball, old habits die hard, even when the evidence against them is clear and compelling. There’s a real fear among many faculty that moving from “because I say so” to “what the numbers say” will reduce their authority, and in a certain sense, that’s true. In my estimation, this is at the root of much of the resentment against outcomes assessment.
Even where there’s a will, sometimes there just isn’t the time. It’s one thing to reinvent your teaching when you have one class or even two; it’s quite another with five. And when so many of your professors divide their time among different employers, even getting folks into the same room for workshops is a logistical challenge.
Of course, accountability matters. Longtime readers know my position on the tenure system, so I won’t beat that horse again, but it’s an uphill battle to sell disruptive change when people have the option of saying ‘no’ without consequence. The enemy isn’t really direct opposition; it’s foot-dragging.
ATD doesn’t address internal politics of colleges as institutions. That’s entirely fair -- they vary by location, and it would probably kill the project altogether -- but anyone who has tried to make headway on these issues can attest that internal politics can kill almost anything. Short of a massive exogenous shock to the system, it’s hard to imagine what will change that.
More darkly, there’s the unspoken truth that some students will just never make it. Depending on your angle to the universe, the meaning of “some” will vary; I’ve heard serious people argue earnestly that the pass rates we currently have are simply the best we can get, given the students we get. It’s hard not to notice that selective institutions have consistently higher student success rates, even when they herd their students into 300-seat lectures taught by graduate students. When you have open-door admissions, you can’t repackage failure as ‘selectivity;’ instead, you have to own it and get blamed for it. Selective institutions can outsource failure; we don’t have that option.
It’s possible to take the study on ATD as vindication for a sort of fatalism, but I think that would be a mistake. I’m not Panglossian enough to assume that this is the best of all possible worlds. In fact, longtime readers may have seen me make suggestions for improvements from time to time. And it strikes me as obviously correct to base strategies for improvement on actual empirical evidence rather than on unthinking adherence to tradition or, alternately, watered-down caricatures of an idealized corporation. My guess is that we’re only beginning to grapple with some of the deeper issues, many of which will require much more disruptive change than most people suspected at the outset. Whether public institutions have the courage to do that, or whether for-profit competitors will swoop in and eat our collective lunch, I don’t know. But if we’re serious, we’d be well advised to attend even more assiduously to reality-based reform.
Thursday, February 10, 2011
Pep Talks
This one is both a confession and a thank you.
Every once in a while, the level of toxicity in this role gets high enough that I have to seek out some colleagues, close the door, and get a pep talk. There’s just no other way to stay sane.
The best pep talks manage to combine a view of the big picture with just enough credible observations of strengths to make it seem manageable. They’re about the situation, as seen from a helpful distance.
When you’re “in the weeds,” as my new favorite saying goes, it can be hard to see the point. That happens from time to time, and those of us who last learn to tough out most of those stretches. But when the weeds are especially thick and have thorns on them, and seem to go on forever, it can be hard not to lose the path. Having someone standing on the outside telling you “it’s that way, idiot!” can make a real difference.
This is where transparency hits its useful limits.
The worst situations feature glaring gaps between what gets said in public and what’s really going on. Proxy issues and shadowboxing divert energy from the task at hand, and frequently cause issues of their own. By the time you get to the third derivative of what you were originally actually talking about, it can be a challenge not to get lost in the curlicues. The problem is that cutting through the tertiary issues too abruptly in public would simply fuel the fire. Behind closed doors, though, you can speak the actual truth. Yes, some of it will be venting, but that’s a necessary part of the process. Sometimes venting can actually help you realize that there’s more to the picture, since saying it out loud makes it harder to skip the leaps in logic. And sometimes it just helps to answer the nagging question “am I the only one who sees how ridiculous this is?”
I’ve both given and received pep talks, and can honestly say that both roles can be gratifying. Receiving a pep talk at the right time can restore needed perspective. Giving a pep talk at the right time is sort of like teaching; there’s something gratifying in watching the lightbulb go on over somebody’s head.
So this is a confession that sometimes I need the pep talk, and a thank you to the folks who provided them recently. Sometimes we all need to be reminded of our better selves.
Wednesday, February 09, 2011
Conversations I Never Hear
I overhear a fair amount of student conversation, just walking the hallways and occasionally eating in the cafeteria.
Words I haven’t heard: Egypt, Mubarak, Obama, oil, revolution, war.
Words I have heard: class, facebook, job, work, girlfriend, assorted cursing.
Admittedly, this is an unscientific sample, and far from comprehensive. Somewhere, someone may be having an earnest, searching discussion of, say, American foreign policy. But I haven’t seen or heard it.
Although my generation was judged disappointingly apolitical by the one before it, I recall plenty of political conversation among students at, say, lunch, in my time at college in the late 1980’s. That’s certainly not to deny the presence of other concerns -- sex, parties, and in-jokes were mainstays -- but it wasn’t odd to overhear students talking about elections, or the latest political controversy, or some new horrifying or exciting historical event they’d just discovered. Many of the comments were either glib or retrospectively horrifying in their naivete, but hey, at least we were trying.
(Compared to some of the current cable news punditocracy, though, I’ll take the naivete, thanks.)
As callow as much of the discussion was, at least there was some sense of entitlement to discuss big issues. Some of the heat in the less pleasant conversations stemmed from a sense, right or wrong, that how we understood something actually mattered. We assumed a certain standing to address Big Questions.
I don’t know to what extent the apparently complete absence of that kind of discussion here is generational and how much is class-based; the average family income of students at my cc is nothing close to what it was back at SLAC. But the difference is striking, and I worry about it.
Politics makes great fodder for developing critical thinking skills, since it’s shot through with ambiguity and conflicting points of view. It’s also well-suited for developing communication skills; thoughtful political discussion takes practice. Even with practice, most of us with fairly distinct points of view (hi!) can sometimes slip into impatient dismissiveness, just out of frustration. For nineteen-year-olds who haven’t given politics much thought, the whole enterprise may well look like the most boring and inscrutable spectator sport ever.
But it shouldn’t be. Politics makes good subject matter for building certain skills, but the substance also matters in itself. In my more idealistic moments, I like to imagine that part of what colleges do is equip students to be thoughtful citizens of a republic. Part of the reason that student politics historically have tended toward the callow and strident is precisely that college is where many of them are grappling with difficult ideas for the first time. Those initial efforts are bound to be awkward; it would be surprising if they weren’t. The idea is to have those embarrassing early attempts happen in a relatively safe environment, so that as the students move on with life, they can develop more thoughtful perspectives.
If there’s any truth to that -- and I have to believe that there is -- then skipping that crucial early step will have consequences. They won’t have had the experience of long-form political bullshitting, in which they follow an idea until it runs out of gas. (There’s something really humbling about that.) They won’t have found themselves in the awkward position of discovering a flaw in an idea they had espoused with great passion. (In my experience, it leads to that same burst of cold that hits you right after you realize that you left your wallet at home.) At most, they may have experienced politics as a particularly mean and pointless source of irresolvable conflict.
Which it can be. But it can also be more than that.
I hope that this is just me showing the same generational deafness as the Boomers showed my cohort; somewhere under the surface -- maybe online? -- students are having passionate political debates. If that’s all it is, then I happily plead guilty to obliviousness and creeping fogeyism. But I don’t get that impression. Instead, I suspect that the disconnect from politics is either class-based or generational, and I’m not sure which is worse. If it’s class-based, then I expect the one-sided class warfare of our politics to get even worse over time, with tragic consequences. If it’s generational, and even the rich kids can’t be bothered, then I don’t know what will hold up the system. Yes, 2008 supposedly featured an unusually high youth voter turnout, but I haven’t seen any signs of actual political engagement since then.
Wise and worldly readers, I hope I’m just out to lunch on this one. Are folks at cc’s also not seeing what I’m not seeing? Are folks at more elite/wealthy institutions seeing political engagement among students?
Words I haven’t heard: Egypt, Mubarek, Obama, oil, revolution, war.
Words I have heard: class, facebook, job, work, girlfriend, assorted cursing
Admittedly, this is an unscientific sample, and far from comprehensive. Somewhere, someone may be having an earnest, searching discussion of, say, American foreign policy. But I haven’t seen or heard it.
Although my generation was judged disappointingly apolitical by the one before it, I recall plenty of political conversation among students at, say, lunch, in my time at college in the late 1980’s. That’s certainly not to deny the presence of other concerns -- sex, parties, and in-jokes were mainstays -- but it wasn’t odd to overhear students talking about elections, or the latest political controversy, or some new horrifying or exciting historical event they’d just discovered. Many of the comments were either glib or retrospectively horrifying in their naivete, but hey, at least we were trying.
(Compared to some of the current cable news punditocracy, though, I’ll take the naivete, thanks.)
As callow as much of the discussion was, at least there was some sense of entitlement to discuss big issues. Some of the heat in the less pleasant conversations stemmed from a sense, right or wrong, that how we understood something actually mattered. We assumed a certain standing to address Big Questions.
I don’t know to what extent the apparently complete absence of that kind of discussion here is generational and how much is class-based; the average family income of students at my cc is nothing close to what it was back at SLAC. But the difference is striking, and I worry about it.
Politics makes great fodder for developing critical thinking skills, since it’s shot through with ambiguity and conflicting points of view. It’s also well-suited for developing communication skills; thoughtful political discussion takes practice. Even with practice, most of us with fairly distinct points of view (hi!) can sometimes slip into impatient dismissiveness, just out of frustration. For nineteen-year-olds who haven’t given politics much thought, the whole enterprise may well look like the most boring and inscrutable spectator sport ever.
But it shouldn’t be. Politics makes good subject matter for building certain skills, but the substance also matters in itself. In my more idealistic moments, I like to imagine that part of what colleges do is equip students to be thoughtful citizens of a republic. Part of the reason that student politics historically have tended toward the callow and strident is precisely that college is where many of them are grappling with difficult ideas for the first time. Those initial efforts are bound to be awkward; it would be surprising if they weren’t. The idea is to have those embarrassing early attempts happen in a relatively safe environment, so that as the students move on with life, they can develop more thoughtful perspectives.
If there’s any truth to that -- and I have to believe that there is -- then skipping that crucial early step will have consequences. They won’t have had the experience of long-form political bullshitting, in which they follow an idea until it runs out of gas. (There’s something really humbling about that.) They won’t have found themselves in the awkward position of discovering a flaw in an idea they had espoused with great passion. (In my experience, it leads to that same burst of cold that hits you right after you realize that you left your wallet at home.) At most, they may have experienced politics as a particularly mean and pointless source of irresolvable conflict.
Which it can be. But it can also be more than that.
I hope that this is just me showing the same generational deafness as the Boomers showed my cohort; somewhere under the surface -- maybe online? -- students are having passionate political debates. If that’s all it is, then I happily plead guilty to obliviousness and creeping fogeyism. But I don’t get that impression. Instead, I suspect that the disconnect from politics is either class-based or generational, and I’m not sure which is worse. If it’s class-based, then I expect the one-sided class warfare of our politics to get even worse over time, with tragic consequences. If it’s generational, and even the rich kids can’t be bothered, then I don’t know what will hold up the system. Yes, 2008 supposedly featured an unusually high youth voter turnout, but I haven’t seen any signs of actual political engagement since then.
Wise and worldly readers, I hope I’m just out to lunch on this one. Are folks at cc’s also not seeing what I’m not seeing? Are folks at more elite/wealthy institutions seeing political engagement among students?
Tuesday, February 08, 2011
Disclosure
Actual conversation from last night, at the kitchen table. The Girl is writing out her Valentine’s Day cards, and The Boy is working on a report on Thomas Edison.
The Wife: TB, do you have to do valentines?
TB: Well, we don’t have to, but we can. If we do, we have to do one for everyone in the class.
TW: Do you want to?
TB: (shrugging): I guess so.
TW: Is there anyone outside of your class you’d like to give one to?
(pause)
TB (smirking): I choose not to disclose that information.
Monday, February 07, 2011
Refreezing
I’ll skip yet another weather-related rant, except to use it as a metaphor. Those of us in chilly climes know that a warm day in winter is very much a mixed blessing; yes, it helps clear the backlog of snow and ice, but inevitably some of the resulting water is blocked from going where it should, so it refreezes. Refrozen stuff is often even worse than the original, since it’s smoother and harder to see. (The usual term of art is “black ice,” since you can see black pavement underneath.)
I’m wondering if there’s a way to prevent refreezing of campus initiatives.
I’ve been through this cycle enough times now to recognize it. Someone proposes something innovative. It gets support, grows, gets more support, and becomes a regular part of what we do.
Then the refreeze hits. The original spirit of innovation is lost, the thing hardens, and what was once daringly responsive to new conditions has become dogmatic and brittle.
This sort of thing happens in the real world all the time. Some innovators keep moving, but too many stop trying once they’ve found something that worked. In a competitive marketplace, standing pat for too long is a sure recipe for failure, as hungry new competitors will come along and seize the opportunities with which you couldn’t be bothered.
On campus, though, the lack of a meaningfully competitive internal marketplace can lead to old dogmas far outliving their time, and even starving promising new ideas of resources.
In the latest version of this dilemma, a program that was legitimately daring and new when it began, decades ago, is starting to look like just another interest group. It has been called ‘innovative’ for so long that many of its partisans simply equate ‘innovation’ with the project, and therefore assume that any redirection of resources away from it is, by definition, an attack on innovation.
To extend the ‘warmth’ metaphor, of course, a sustained period of fiscal warming would melt the ice. With enough resources that the college didn’t have to choose between new and old, but could do both, the dilemma would mostly go away. But I don’t see that happening.
Worse, too many internal constituencies are wrapped up in a worship of “past practice,” not realizing that changes from past practice are exactly the point. If past practice were still convincing, we wouldn’t need innovation. But the world changes, new possibilities emerge, and stasis is not a serious answer. Experiments can’t be negotiated and spelled out in advance; that’s why they’re experiments. Cutting down the future to the size of the present is a crime against possibility. Black ice isn’t the answer; it’s part of the problem.
Wise and worldly readers, has your campus or business or organization found a way to encourage the continued cycle of innovation without falling prey to repeated refreezes?
Friday, February 04, 2011
Confetti on the Table
- Twitter’s coverage of the revolution in Egypt has been revelatory. Jillian York (@jilliancyork) has singlehandedly done a better job than all of the tv networks and newspapers combined. I started doing Twitter as a lark, but it’s really proving itself. Katrina Gulliver (@katrinagulliver) is emerging as a breakout star of the medium. Highly recommended.
- The dining room table is covered in what looks like confetti. Since I’ve been struggling with the organization of the book, I decided to write ideas down on small strips of paper, then arrange the strips into groups as they made sense. TW has been a good sport, but I’m acutely aware that I’ve only used half of the strips thus far, and have already covered an entire placesetting. My goal is to get everything in reasonable order without having to add the leaf. Luckily, it’s cold enough that I don’t think there’s much danger of anybody opening a window.
- Okay, enough snow days. Seriously. Meeting schedules and deadlines are complete disasters at this point, and there’s really no place left to put snow. The kids and the dog are getting cabin fever, and I don’t even want to think about when their summer vacation will start. A snowblower should not be an everyday tool, and the daily battle between the road plow and the bottom of the driveway feels like an allegory for something.
- Why doesn’t Amazon have a Kindle app for WebOS? I know WebOS isn’t huge, but how hard could it be? With Apple getting more territorial about taking a cut of everything, it seems like diversifying the platforms would make sense. If not a full-blown app, then at least an easily used mobile-scaled website for Kindle.
- Patton Oswalt’s Zombies, Spaceships, Wasteland is better than I expected. He captures the frustration of being a basically decent person trapped in an absurd situation, whether it’s a doomed multiplex in high school or a tragic comedy club in Canada. The book has its share of jokey bits, but I was impressed at how reflective most of it was. Several chapters really capture the sort of walking sadness I remember vividly from my own teen years. His reflections on male nerd culture at a certain historical moment ring true. Thoughtfully done.
- The Boy won a certificate for the President’s physical fitness challenge. Mercifully, it never occurred to him to ask if I had...
- Life’s awkward moments: someone on campus recently recommended that I start reading “a blog by this guy, Dean Dad.” I nodded, and smiled noncommittally.
- This weekend, we are all cheeseheads. The Onion pretty much nailed it with its headline “Ben Roethlisberger One Win Away from Being a Good Person.” Go, Packers!
Thursday, February 03, 2011
Thoughts on "Academically Adrift"
Still marooned by snow -- seriously, guys, the bloom is off the rose -- I had the chance to devour Academically Adrift, by Richard Arum and Josipa Roksa. It’s a study of student performance on the Collegiate Learning Assessment exam, focusing particularly on demonstrated critical thinking skills. It’s the book that made headlines with its claim that most students don’t learn anything during their first two years of college. As someone who works at a two-year college, I considered the gauntlet thrown.
My first observation, which was largely ignored in the initial wave of reports, is that the sample they used only included four-year colleges. Community colleges were not included in the sample. Based on the rest of the findings, I doubt that we’ve cracked the secret code, but it’s certainly a glaring oversight for a study of the first two years of college.
That said, I’m pretty conflicted in my responses overall. It’s an impressive piece of analysis, certainly. The data work took some doing, and the prose is evenhanded and relatively clear by social science standards. (Social scientists aren’t known for our limpid prose, as a breed.) I’m just not sure how useful it is.
My takeaways:
- I wasn’t shocked to see that math, science, humanities, and social science majors tend to show the greatest gains in critical thinking, as compared to students in business, health, computer science, or communications. The authors were gracious enough not to go into too much detail there, and it would be politically unwise for me not to heed their example, but it was hard not to notice.
- Also not shocking: students who are assigned more reading and writing get better at reading and writing. In a related story, bears tend to crap in the woods.
- Group study was no more conducive to improved critical thinking skills than socializing. Individual study correlated strongly to improved skills; group study did not. This certainly fits my own personal preference and intuition, but it’s nice to see empirical confirmation.
- “Student engagement,” as measured by NSSE (and presumably CCSSE), correlates to retention, but not to increased learning. Fraternities and sororities lead to higher student satisfaction, but lower learning. Again, not shocking, but nice to see confirmed.
- Instructor expectations matter. Students who have more professors with high expectations learn more than students who don’t. Given that students will often go out of their way to seek out ‘easy’ professors and avoid ‘hard’ ones, this suggests a dilemma.
- Federal research funding dwarfs federal funding for improving instruction -- say, FIPSE -- by several orders of magnitude. Incentives matter.
- What’s good for retention may or may not be good for learning. Students can stick around for years without learning much, depending on what they’re doing. Arum and Roksa note that learning communities are positively correlated with retention in the national literature, for example, but there is no evidence one way or the other of their effects on learning.
- Reflecting on my time at Snooty Liberal Arts College, I could see why its students would do markedly well on tests of critical thinking. It had no ‘business’ or ‘communications’ majors; it had very selective admissions and therefore a strong ‘peer culture,’ and it lacked frats. My cc also lacks frats, but the other components don’t really carry over.
- ‘Peer culture’ is huge. If you run with a crowd of high achievers, you will adapt to it; if you run with a crowd of hard partiers, you will adapt to that. In an open-admissions institution, this presents a substantial challenge. (Some peer cultures are trickier than others. Coming from a public high school in a middle-class suburb, it took me a semester to raise my game when I got to SLAC. I didn’t know that the prep school kids affect insouciance in public while studying like crazy behind closed doors.)
- On-campus employment helps, if it’s up to ten hours a week. Off-campus employment hurts. Interestingly, grants help but loans hurt.
- Many students see college (and here the fact that it’s a sample of four-year colleges may matter) as primarily a social experience. It’s a chance to get away from Mom and Dad, to make new friends, to explore lifestyle options, and to get a credential. If that’s your orientation, then ‘learning’ is fine, as long as it doesn’t require time and effort. In that climate, lone instructors who raise academic expectations may pay a price in student anger.
At my cc and at most that I’ve seen, dorms don’t exist, and the whole “college experience” is pretty attenuated. (We don’t have climbing walls, a football team, fraternities, or even a quad.) If football Saturdays are your idea of college, you don’t come here. That said, though, it’s still very much the case that academics are often only one priority among many in students’ lives. As our student body gets progressively younger and more ‘traditional,’ some of the quirks of 18 year olds will probably become more relevant here.
- To their credit, Arum and Roksa note that making sustained and significant progress on student critical thinking skills would require fundamental realignments of incentives across the entire structure of higher ed. They seem a little too quick, in my estimation, to assume that “employers” want critical thinking skills -- at the entry level, in my observation, they’re much more focused on enthusiasm than on analytical prowess -- but that just makes matters worse.
And the incentives point is what’s ultimately so frustrating about the book. Yes, it would be lovely if students naturally clustered into the liberal arts, where virtuous and civic-minded professors larded their plates with ample helpings of robust reading and writing assignments. In the settings where that actually happens, measured learning outcomes are strong. But when you have open-door admissions and low per-student funding, getting there from here would require changes of staggering magnitude. Funding mechanisms would have to change; national markets would have to change; collective bargaining agreements would have to change; longtime readers can guess the rest...
Still, it’s a reminder of some of the right questions, and it sheds useful light in some corners. Maybe expanding the “individual quiet study” area in the library should take precedence over the “group study” section; I can do that. Maybe a little more skepticism towards “student support” offices, as against direct instruction, is in order; that may work.
Now if I could just get the voters to do something about that funding...
Wednesday, February 02, 2011
Hey Hey Hey!
We were all pretty much snowbound yesterday, yet again, so The Wife and I decided to use some streaming goodness to introduce the kids to Fat Albert.
As card-carrying Gen X’ers, we’re old enough to remember when there were only a few channels on tv, and you and all your friends watched most of the same things. We both watched Fat Albert as kids, though we didn’t know each other then. (Chances are that we saw some of the same episodes simultaneously, in our different states, not knowing it. The thought makes me smile.) Now that much of that stuff is available again, through the miracle of the interwebs, we can see it with adult eyes.
Fat Albert has aged pretty well, as these things go.
Yes, there was some visual shock. Bill Cosby was soooo young, though I don’t remember him seeming young at the time. The drawing style was distinctly ’70s -- it looks like a kid-friendly version of the cover of Miles Davis’ On the Corner album. (Back then, musicians recorded music onto vinyl discs that got sold...ah, never mind...) The music was distinctive, and far better than it had any right to be.
We had some gobsmacking moments of nostalgia, of course. The kid who ended every word with “B” cracked us both up (his attempt to say “abdicated” was worth a whole episode), and the theme song hits you right away. The silhouette shot of the group walking is unforgettable, and the junkyard-band bit was just as cool as I remembered. (And the characters! Mushmouth, Weird Harold, Bill and Russell, Albert, Rudy, Donald...each with his own walk. I don’t think Rudy’s walk is even physically possible.)
The acid test, though, was the reception from The Boy and The Girl. They had no nostalgic reason to watch it, so we were curious to see how they’d respond.
They enjoyed it, especially The Girl. They liked the slapstick and the characters, but I noticed that they really picked up on the sweetness of it.
I remembered each episode having a moral, but I’d forgotten just how careful Cosby was to make the dilemma both clear and basically safe. He’d actually interrupt the cartoon to make sure the kids watching didn’t lose the thread of the story. He presented himself as basically busy doing something else, like he just happened to be taking a moment to talk patiently to a kid. It’s a nice move, since it’s much closer to a kid’s real experience of a parent than a tv host who’s doing nothing but trying to entertain. The experience of it felt like having a calm, confident Dad walk you through a story in which some bad things happen, and some silly things, but you know everything will turn out fine. TG loved both episodes that we watched, and wanted a third.
The cartoons the kids watch now are very different. The Penguins of Madagascar is visually magnificent and endlessly clever, but it’s fast-paced and amoral. SpongeBob can be clever and funny, but it’s hardly about learning lessons. Cartoons now are frequently much more laden with adult humor and the requisite postmodern self-referentiality; the best of them are witty and fun. But they aren’t sweet.
That’s okay, of course; no show has to do it all. Fast-and-clever can be fun. But it was striking to see a show that was so earnest, and ambling, and willing to repeat itself to make sure the kids could follow. They did; TB and TG both got the point, but with enough silliness that it didn’t seem preachy. They enjoyed the stories, TW and I enjoyed the music, and we all enjoyed the slapstick and the amazing styles of each character’s walk. I’d forgotten just how carefully Cosby crafted the show. Almost forty years later (!), it still works. I’d be surprised if the same is true of the Penguins of Madagascar.
Hey hey hey!
As card-carrying Gen X’ers, we’re old enough to remember when there were only a few channels on tv, and you and all your friends watched most of the same things. We both watched Fat Albert as kids, though we didn’t know each other then. (Chances are that we saw some of the same episodes simultaneously, in our different states, not knowing it. The thought makes me smile.) Now that much of that stuff is available again, through the miracle of the interwebs, we can see it with adult eyes.
Fat Albert has aged pretty well, as these things go.
Yes, there was some visual shock. Bill Cosby was soooo young, though I don’t remember him seeming young at the time. The drawing style was distinctly 70’s -- it looks like a kid-friendly version of the cover of Miles Davis’ On the Corner album. (Back then, musicians recorded music onto vinyl discs that got sold...ah, never mind...) The music was distinctive, and far better than it had any right to be.
We had some gobsmacking moments of nostalgia, of course. The kid who ended every word with “B” cracked us both up (his attempt to say “abdicated” was worth a whole episode), and the theme song hits you right away. The silhouette shot of the group walking is unforgettable, and the junkyard-band bit was just as cool as I remembered. (And the characters! Mushmouth, Weird Harold, Bill and Russell, Albert, Rudy, Donald...each with his own walk. I don’t think Rudy’s walk is even physically possible.)
The acid test, though, was the reception from The Boy and The Girl. They had no nostalgic reason to watch it, so we were curious to see how they’d respond.
They enjoyed it, especially The Girl. They liked the slapstick and the characters, but I noticed that they really picked up on the sweetness of it.
I remembered each episode having a moral, but I’d forgotten just how careful Cosby was to make the dilemma both clear and basically safe. He’d actually interrupt the cartoon to make sure the kids watching didn’t lose the thread of the story. He presented himself as basically busy doing something else, like he just happened to be taking a moment to talk patiently to a kid. It’s a nice move, since it’s much closer to a kid’s real experience of a parent than a tv host who’s doing nothing but trying to entertain. The experience of it felt like having a calm, confident Dad walk you through a story in which some bad things happen, and some silly things, but you know everything will turn out fine. TG loved both episodes that we watched, and wanted a third.
The cartoons the kids watch now are very different. The Penguins of Madagascar is visually magnificent and endlessly clever, but it’s fast-paced and amoral. SpongeBob can be clever and funny, but it’s hardly about learning lessons. Cartoons now are frequently much more laden with adult humor and the requisite postmodern self-referentiality; the best of them are witty and fun. But they aren’t sweet.
That’s okay, of course; no show has to do it all. Fast-and-clever can be fun. But it was striking to see a show that was so earnest, and ambling, and willing to repeat itself to make sure the kids could follow. They did; TB and TG both got the point, but with enough silliness that it didn’t seem preachy. They enjoyed the stories, TW and I enjoyed the music, and we all enjoyed the slapstick and the amazing styles of each character’s walk. I’d forgotten just how carefully Cosby crafted the show. Almost forty years later (!), it still works. I’d be surprised if the same is true of The Penguins of Madagascar.
Hey hey hey!
Tuesday, February 01, 2011
Meritocracy and Hiring
Is academic hiring meritocratic? The author of this piece assumes that it is. As someone whose job it is to actually hire faculty, I can attest that merit is only a small part of the picture.
The single most important part of the picture is the existence of a position at all. In this funding climate, we can only afford to staff a few of the positions (whether faculty, staff, or administration) that we need. If the position doesn’t exist, then the relative merit of the prospective candidates means exactly zero.
That may seem obvious, but it gets blithely ignored in the piece. Posted tenure-track faculty positions were down by double digits in most disciplines last year. Does that mean the merit of the candidate pool went down by double digits? Um, no.
In a particularly cruel catch-22, the relative ease of finding adjuncts for a given discipline actually militates against its getting a line. If you can only afford to hire one full-timer, and you have requests from both history and, say, pharmacy, what do you do? If good history adjuncts are easy to find, and good pharmacy adjuncts are nearly impossible, you give the line to pharmacy. An oversupply of candidates in a given discipline can actually depress demand for those candidates. (Say’s Law in reverse: supply actually depresses demand.) The connection to individual ‘merit’ is obscure at best.
For public institutions -- which employ a significant percentage of faculty in the US -- political winds at the state (and sometimes county) level also have serious impacts on hiring. For example, my college just got word that next year will bring yet another seven-figure cut in our operating funds. Obviously, any serious programmatic expansion is out of the question. This has literally nothing to do with the ‘merit’ of any given candidate. Depending on the state, the political winds may make the economic ones even worse. Combine a recession with a Taxpayer’s Bill of Rights or a Prop 13, and all bets are off.
Of course, there’s also the basic incompatibility of life tenure with the idea of meritocracy. If incumbents don’t have to keep proving themselves against newcomers, then you do not have a meritocracy. Tenure violates the foundational assumption of meritocracy. In truly performance-driven settings, there’s no such thing as resting on your laurels; you are either the best at your role right now or you are not, and you’d better be ready to prove yourself at any moment. If we had a meritocratic revolution, tenure would be the first casualty.
But even taking all of that as given, are the searches that actually happen reflections of pure merit?
They couldn’t be, because there is no such thing. Instead, there’s something like ‘fit,’ which only makes sense in context. Situational merit -- or what I above called “best at your role” -- necessarily relies on the situation (or role). As the situation changes, so does the merit.
Quick, who has more situational merit: a well-published candidate with an indifferent teaching style, or an engaging teacher who rarely publishes? A research university would answer that differently than a community college would. Unless you assume a single linear chain of being, like the old social Darwinists, you have to confront the diversity of missions of various institutions. ‘Merit’ in one setting does not necessarily imply merit in the other.
Alternately, who has more merit: a professor of French or a professor of Spanish? The latter has a much better shot at getting hired, because that’s where student demand is. You may be a fantastic French teacher, but if we don’t need it, we don’t need it. Enrollments aren’t the only drivers of hiring, but they matter. I don’t know how to judge the ‘merit’ of one language against another, but I know quite well how to measure the enrollments of one against another. In the absence of sustainable public subsidies, tuitions will pay the bills. C’est la vie.
Even within the same department or program, needs will vary over time. Sometimes a department needs a peacemaker and sometimes it needs a sparkplug. Sometimes it needs to diversify its demographics by race or gender. Sometimes it’s too inbred, with everybody coming from the same one or two graduate programs, and it needs new perspectives. Sometimes it just needs someone who isn’t allergic to the internet. None of those has anything to do with ‘merit’ in the sense the term is usually used, but each makes sense in its own way.
The key is to recognize that hiring is always more about the employer than about the employee. Employers hire to solve problems they consider important. If you’re the best darn German professor who ever walked the planet, congratulations, but I don’t need you. I don’t doubt your brilliance, your hard work, your civic virtue, or your habit of helping old ladies across the street. They just don’t matter. It’s not about you.
Conversely, if you landed in a great job, congratulations! Enjoy it, work hard, and do it without guilt. But it would be ethically unbecoming to assume that it reflects your personal superiority to those who didn’t make it. There’s such a thing as being in the right place at the right time.
My objection to the ‘meritocracy’ piece isn’t just that it’s inaccurate, although it is, or that it’s arrogant, although it is. My objection is that it feeds a myth that does real harm.
If you believe that the academic job market is a true meritocracy, and you’ve been freeway flying for a while now, what does that say about you?
I’m convinced that one reason some people won’t let go of the dream, despite years of external signals suggesting that they should, is a sense that doing so would reflect a personal moral failing. They’ve identified so completely with the ‘meritocracy’ myth that they feel a real need to redeem themselves within it. It’s more than the money; other fields often pay more. Instead, they see the status of “tenured professor” as a sort of validation of everything they’ve done. Leaving the academy would be admitting defeat and accepting failure; lifelong “A” students, as a breed, aren’t very good at that. It’s not what they do.
My proposal: let’s recognize the academic job market as the uneven, unpredictable, often unforgiving thing that it is. Good people lose. Frankly, some real losers sometimes win. It’s not entirely random, of course, but it’s a far cry from a meritocracy. Let’s stop recruiting for a meat grinder of a market and pretending that it will all work out in the end. And for heaven’s sake, let’s stop pretending that it’s all about the candidates. It just isn’t.