They don’t teach this stuff in dean school.
I hear a rumor that a professor has moved out of his office and intends to take a job this fall, thousands of miles away. Nobody knows the institution, though, only the geographic region. He has family there.
I’ve received no communication at all from this professor. Neither has his chair. Neither has human resources.
I wander by his office, and notice that not only is it empty -- this from someone who has some pronounced “packrat” tendencies -- but that his nameplate is gone.
Hmm.
I email him, asking whether I should report a larceny, or if there’s something I should know. No response.
Keep in mind that this guy has tenure, and is technically entitled to his full slate of classes in September unless he resigns in writing. Since he didn’t sign up to teach summer classes, the fact that he hadn’t been around for the last month or so didn’t raise eyebrows. I’m supposed to assume that he’ll be back, rested and ready, in September.
It’s entirely possible, of course, that he’ll be back, ready to go. Maybe he’s having a midlife crisis, throwing all his worldly possessions in a box, seeing America from a convertible with a woman who’s too young for him, and snapping out of it by Labor Day. Maybe he’s the victim of a really clever prank. Maybe he has already headed out for wherever he’s going, and just couldn’t be bothered to tell anyone. Maybe he’s trying really hard to find something there, but hasn’t yet, and wants the safety net. Maybe the black helicopters took him. Maybe he knew too much about the iPhone, and Steve Jobs had him, um, rebooted.
I have no way of knowing.
If he doesn’t return, of course, we’ll have to scramble to cover his classes, and I’ll get the “the administration should have known” stuff. But if we get his classes covered and he comes back, we’ll have some cranky substitutes on our hands. People don’t like prepping for classes that get taken away at the last minute.
Emergencies happen, I get that. I’ve had professors die mid-semester. But the fact that he cleaned out the office suggests some level of forethought.
Wise and worldly readers, you make the call. What’s the best administrative response to this?
Wednesday, June 30, 2010
Tuesday, June 29, 2010
When Narratives Collide
As regular readers know, I’m a colossal nerd. One of my nerdier habits involves listening to Marketplace podcasts on a daily basis, and even the weekly wrap-up show on weekends. (Livin’ la vida loca!) Over the past few days, I’ve heard variations on these two themes, and I’m having a hard time believing them simultaneously:
1. New college grads are having a terrible time finding jobs. To the extent that many can find jobs, they’re often in positions that don’t really require college degrees.
2. America is falling behind other countries in educational attainment, and the only hope for lasting prosperity is to increase dramatically the educational level of our young people.
I find each of these plausible separately, but I’m having an awful time believing them together. We need more chronically underemployed people who can’t pay back their student loans? It’s not obvious why that would be true.
It may be a short term/long term issue. In the short term we don’t really have enough jobs to go around, but over the long term we’re likelier to prosper with an educated workforce than an uneducated one. Well, okay, but if that’s true, shouldn’t we somehow help tide over the current grads until things turn around?
If the argument for massively increased higher education isn’t individual gain but instead improved odds of larger social gain, then we’re implicitly treating higher education as a public good. If it’s truly a public good, it should be funded accordingly.
In the popular press, point 1 is usually painted as an individual failing, and point 2 is usually treated as structural. But they’re really both structural. If good jobs go begging but new grads would rather smoke pot in Mom’s basement, then okay, that’s individual failing. But that seems to be the exception. More often, new grads are hitting their heads against walls looking for serious employment, and having a terrible time finding it.
Wise and worldly readers, I need your guidance. Is there a reasonable and coherent way to believe both 1 and 2 at the same time?
Monday, June 28, 2010
Partnerships
Although they’re invisible to many faculty, we administrators spend an increasing amount of our time on partnerships with various community agencies, philanthropies, consortia, employment boards, and other ad hoc collaborative groups.
That’s driven by several factors. First, of course, is the basic fact that many social or economic issues require multiple fronts of attack. Improving the employability of the local workforce requires higher education, but not only that; it also requires childcare, social services, and active input by prospective employers in the area.
Second is a history of agencies tripping over each other’s feet. A student who can’t get to class because her daycare fell through shows up as attrition at the college, even if the root of the problem isn’t at the college. In my first year on faculty at Proprietary U, I was struck by the lack of synchronization between the class schedule and the bus schedule; students actually asked to leave class fifteen minutes early to catch a bus, since the next one wouldn’t arrive for several hours. There’s no winning that one.
Third is funding. Government agencies, nonprofits, and philanthropists are getting savvier about the full-court press, so they’re increasingly writing interagency collaboration into the grant requirements. Colleges will go where the money is, so if the money is in collaboration, that’s where we’ll go.
Finally, at some level, there’s a general sense that collaboration is a Good Thing. Which, in a general sense, is true.
But the transaction costs are amazing.
You don’t really appreciate how difficult collaboration is until you contrast it to running your own stuff. Every collaboration needs a “go-to person” at each site, sometimes grant-funded, sometimes not. Every collaboration has its own calendar, which is usually an amalgam of the various partners’ calendars and the preferences of the funding agency. Every partnership has its own ‘benchmarks,’ its own reporting protocols and requirements, its own sunset provisions, its own local ‘matching’ requirements, its own acronyms -- what is it about granting agencies and acronyms? -- and its own assumptions about how the constituent institutions actually run. Those assumptions are frequently, and maddeningly, wrong, but it’s considered bad form to say so.
For example, I’ve had to walk away from grants because they required us to scramble jobs in ways that violated collective bargaining agreements; because they’ve assumed the presence of ‘matching’ resources that simply don’t exist; and because absolutely nobody on campus was willing to be the “go-to person” (also called the “champion”). When you’ve already cut departments to the absolute essentials -- a perfectly predictable consequence of a simultaneous funding crunch and enrollment boom -- then you don’t have the spare resources to devote person-hours to ancillary activity. You don’t have the money it takes to apply for money.
With each new go-to person, we either have a new administrator on campus, or a new pair of adjuncts to carry the classes from which a professor is released. (I sometimes see bloggers asking why we don’t see adjunct administrators. We do. Any time a full-time professor gets a course release for administrative work, that’s adjunct administration. Don’t be distracted by the presence of a middleman.) When we have to hire a new person on ‘soft money’ -- the term of art for grant money with expiration dates -- we still have to do the full search process, with all of the time and money that entails.
All of which is fine, when the projects are good. But they’re much more time-consuming than they get credit for, and far more complicated to manage. They’re consuming ever more administrative time, and the trend line is upward.
Honestly, in many cases, it would be a hell of a lot more efficient just to fund the institutions sufficiently in the first place.
Friday, June 25, 2010
Memory and Sequence
I’ve received a couple of wonderful messages lately from readers, each touching on the theme of memory. Taken together, they’re pretty provocative. First (from an email):
Here's my incredibly important take-away that isn't really noted by most people in higher education: kids forget things. Kids forget what they did last week, kids forget what they had for breakfast, and, most importantly, kids forget almost everything about academics that they've ever encountered. They forget because, and this is pretty cool, we know from actual psychological research that you forget stuff you don't use regularly.

Why don't folks in higher ed notice this as much? My answer is basically, we're not usually seeing the same students again in a class that requires them to remember material that we've previously taught them. I had a lovely student, 3.9 GPA because she got a B freshman year, in one of my senior-level advanced mathematics courses this past semester. She had previously taken a junior-level course with me, exactly one year before. I tried to make reference to things she supposedly knew from that course. She had no idea what I was talking about... (She's the diligent and talented sort rather than the maniacal, using your typology--still my fav DD post). I can tell loads of similar stories from every level of teaching that I've done.
and second:
One of the things we (meaning me and my colleagues who teach calculus and trig) talk about regularly is the fact that we all know that certain students knew skill "X" when they passed the previous class - including my own - and forget it within a month. We have to do our best to ensure that we each know that such regular occurrences are not the fault of the instructor, since we can't evaluate what happens a month later, and yet work on ways to reduce how often those situations occur. When I have control, like when students from my own Physics 1 class don't remember to draw a free-body diagram in Physics 2, I make it clear that the failure is completely unacceptable.
It’s true, and it’s a bear to address.
I recall a student I tried to advise at Proprietary U. He was several semesters into his program, and he was choosing classes for the following semester. I mentioned that course x was next in the sequence, and required for his program; he objected that it covered a software package he didn’t know. I responded that the software package was covered in the class he was currently finishing. His response, which haunts me to this day: “but that was over a month ago!” His tone suggested that I was being completely outlandish; he was just mannerly enough not to end with “duh!”
Some of that is just a cost of doing business. Memory can play weird tricks. (For example, I remember vividly a history professor in college mentioning that England experienced five eclipses in 1678. That factoid has yet to prove useful, but decades later, it’s still in there. I don’t even know if it’s true!) But it’s also true that thoughtful course sequencing -- which presupposes both thoughtful curricular design and steady academic advisement -- can provide reinforcement of key skills.
It also suggests the limits of passing judgment on previous instructors based on current performance. It’s hard enough to judge teaching success in the moment, but judging it later requires differentiating between ‘never got it’ and ‘got it but lost it.’
Some classes focus more on skills than on specific content, so the style of memory involved is different. But there, too, it’s largely about repetition and effort.
Wise and worldly readers, have you found an elegant way to address leaky student memories?
Thursday, June 24, 2010
Making Yourself Dispensable
Last week I heard an interview with Seth Godin in which he mentioned the need for employees to make themselves indispensable.
In the context of academic administration, I have to disagree. In fact, in many ways, making yourself dispensable means you’re doing your job well.
As an administrator, I’m working with smart, creative, extremely independent people -- both faculty and staff -- most of whom have very pronounced ideas as to how things should be done. They have a full range of personalities, including the flaws, and varying vantage points on the college. Most are generally well-intended, as they see it, even when their unspoken ideas of The Good conflict with each other’s.
If I try to centralize all wisdom in myself, I’m wasting valuable resources. If I try to wheel-and-deal my way to importance, I’m outnumbered.
The contribution I can make from my office -- and for the record, there is one -- is in setting the processes, background conditions, and climate in which people can do their best work without getting embroiled in unproductive conflict or drama. When this works, it looks like I’m not doing much of anything at all. As with editing, doing it well usually means going unnoticed. But take it away or get it wrong, and you see the difference immediately.
I’ve seen administrators try to make themselves indispensable by hoarding information or by constructing elaborate networks of side deals in which they fancy themselves key nodes. It never ends well. Moving people around like chess pieces creates an illusion of control, but then the chess pieces start moving on their own and the entire scheme crashes. Worse, someone eventually catches wind of some little side deal you were hoping to keep quiet, takes offense, calls in a third party, and makes your life hell. Not worth it.
The business literature largely exalts the larger-than-life, the outlier, or the ‘purple cow.’ There’s some truth to that, but it’s easy to misread. Much of what these jobs require is something closer to a willingness to experience success vicariously. I consider it a victory when we manage to establish a routine protocol for some recurrent event. That’s a huge win because it allows us to redirect energy away from something banal and towards something progressive. Establishing routine systems -- that is, distinguishing between offices and officeholders -- is the boring-but-important work that allows the organization to devote resources to doing its best work. In these roles, paradoxically enough, you make yourself valuable by making yourself dispensable.
Tuesday, June 22, 2010
The Wrong Metaphor
My town is dealing with the same economic pressures as most -- declining state aid, declining tax revenues -- so it’s facing some unpleasant budgetary choices. (The culprit behind declining state aid is mostly Medicaid. Until we get a handle on that, we’re in trouble. But that’s another post.)
Recently a few members of the city council proposed making some nasty cuts to the public school budget. Word got out, and I and a few hundred other people attended an astonishingly long meeting to discuss the plan. After a few obligatory pleasantries, the meeting went to the ‘public comment’ section, in which members of the public at large got to address the council (and the audience). Several dozen people spoke, myself included, and most followed what amounted to a script:
I have lived here for x years. I have x number of kids in the schools. I am shocked and appalled that the council would consider selling out the children. Children are the future. etc.
Listening to the speakers, I realized why it all seemed so familiar. It played like a particularly bad all-faculty meeting! It had the ritualistic indignation, the demagoguery, the direct and very affronted personal accusations, the recitations of litanies, the occasional moonbat, and a coercive level of groupthink. And I say that actually having agreed with the position the audience took!
Watching the council members up on stage, I realized that they face pretty much the exact same thing academic administrators face.
Most of the management literature assumes a for-profit setting, in which managers have the power to decide who they want on the bus. In tenured academia, though, that’s not the case. You inherit people, and you can’t get rid of them, no matter how toxic they might be. The partisans of tenure -- you know who you are -- rarely, if ever, address what that means for administration; they typically just assume (without actually saying) that something like a self-governing anarcho-syndicalist commune would be ideal, preferably with some distant external agency underwriting it. That is, until someone is mean to them.
But in public higher ed, something like ‘local politician’ comes closer to the truth. You have to maintain your poise while being viciously attacked by people who aren’t accountable for what they say. Instead of focusing on making the right decisions, you focus largely on process. (In this setting, even the right decision can be wrong simply because you made it.) You have to maintain good working relationships with people who get on your nerves, and even with people who go out of their way to defeat you just for the sheer hell of it. There’s a constant tension between high purpose and nagging detail.
The metaphor matters because the skills of a good local politician are different from the skills of a corporate manager. The shoot-from-the-hip autocratic style can work in a single-purpose setting, but it’s a train wreck waiting to happen in a setting in which cross-purposes are normal and you can’t just fire people. What looks like ‘insubordination’ in one setting is considered ‘a healthy exchange’ in another. And the ability to not take it personally is unevenly distributed.
In this case, the good guys won. The schools were spared the nasty cuts, and the town found other ways of coping. A painfully long, very healthy exchange led to a reasonable outcome. Everybody left intact. For all the barbed language, nobody was so estranged as to prevent future collaboration. It wasn’t pretty, but it worked. There’s a lesson in there somewhere...
Pass Rates
In a brief conversation with a professor on campus recently, I was reminded of a basic assumption gap. I made a reference to pass rates -- the percentage of students who achieve passing grades in any given semester -- and the efforts we’re making there. My assumption was that pass rates are scandalously low, and that we need to improve them. He concurred that there was a problem with pass rates, but defined the problem differently. To him, the issue was that our pass rates are much too high. We need to get tougher, he argued, so we wouldn’t have so many weak performers in second-year classes.
Hmm.
I remember holding that perspective in my first few years on faculty. Looking at student papers that ranged from ‘meh’ to ‘so bad that they warped the very fabric of space-time by the sheer force of their suckitude,’ I sometimes wondered about the folks who had allowed them to pass this far. I recalled a conversation with a muckety-muck at Proprietary U who mused aloud about how to increase graduation rates: I had responded with a curt “stop advertising on the Jerry Springer show and get us better students.” So I couldn’t deny knowing what he meant.
Properly understood, though, I don’t think these two perspectives are necessarily in conflict.
If you understand student capabilities as given and fixed, then the two perspectives have to clash. If students will simply do what they do, then the only question is how high or low to set the bar. In that model, there’s a direct and inverse relationship between academic rigor and pass rates. The only way to raise pass rates in that model is to lower standards. If you buy this view of things, then lower-tier schools are doomed to an eternal struggle between the soulless bean-counting bureaucrats who would favor a fog-the-mirror test for graduation, and the heroic but tragic figures on the faculty who are consigned to a rearguard battle on behalf of excellence and truth and yadda yadda yadda.
In other words, it’s crap.
If you assume, though, that pass rates reflect more than just immutable underlying traits of given students, then the ‘tragic conflict’ model becomes less convincing.
In the aggregate, I think the latter assumption is so clearly stronger that there almost isn’t an argument. But in a given semester, in a given class, it’s true that students tend to sort themselves pretty quickly. The improvements that mean something often take more than a single intervention or a single semester; by the time a given professor gets the newly-improved student, s/he can take that improvement as part of that student’s ‘given’ talents. Which, for all short-term practical purposes, is true.
For a college to try to improve its pass rates by lowering its standards will ultimately be self-defeating. Students rise, or fall, to meet expectations, and a devalued degree will be treated accordingly. The way to raise pass rates in an open-door institution is to arrange everything possible to help students help themselves, and to hold them to high standards. That’s beyond the purview of any one class, and properly so; that’s why taking talk of ‘pass rates’ personally can lead to some unproductive conclusions. It was just a little jarring to hear my older self spoken back to me so directly.
Monday, June 21, 2010
You’re Not Helping!
Last week Congress held some hearings on accreditation requirements and the definition of the credit hour. In reaction to the increasing percentage of federal financial aid that’s going to students at for-profits, and the somewhat generous interpretations some for-profits have had of the ‘credit hour,’ Democrats decided to mount a spirited defense of the seat-time based credit hour.
Note to Congressional Democrats: you’re not helping! The credit hour is the wrong hill to die on. If anything, it’s a major part of the problem. And I say that as both a bona fide lefty-liberal and a community college administrator.
Honestly, this seems to be one of those cases in which people take sides depending on who their allies are, rather than on the logic of the position itself. Democrats usually have strong backing in traditional higher ed, so they’re taking a position they presume will be popular there. Republicans have no use for higher ed generally, and have an ideological weakness for privatization, so they’re siding with the for-profits. But the logic of the positions is backwards.
The crisis of public higher education isn’t the for-profits. It’s unsustainability. The for-profits are just a symptom of that. And the credit hour is part of the unsustainability. If you want to save public higher ed, both for its voters and for the actual good that it does, you need to let it find a sustainable way of operating. This isn’t it.
Generally, a credit hour has been defined as fifty or fifty-five minutes of class per week for about 15 weeks, plus the presumed out-of-class work that goes with it. Most classes carry either three or four credits. For the sake of simplicity, the minutes of class are usually referred to as “seat time.”
In the best of times, it’s a slippery definition. Although it measures time-in-a-room, it doesn’t really measure time-on-task. And the ‘out-of-class’ component is essentially taken on faith. In my years on faculty, nobody ever told me that I was supposed to calibrate the out-of-class time to a specific ratio, and I didn’t. Based on the varying amounts of work I did for classes as a student, I’d be hard-pressed to say that they hewed to any consistent rule.
Even if we agreed to look past those, though, two major problems have arisen that are eating away at the foundation of the entire conceit. One is online education, and the other is cost.
Part of the definition of online education is that it renders the concept of “seat time” unintelligible. Some people read (and write) more quickly than others, and the presentation is usually asynchronous. “Go at your own pace” is part of the appeal. For courses that were originally developed in a traditional format and have simply migrated online, credit-hour designations usually reflect how they started. (If in-class Psych 101 was three credits, then online Psych 101 is also three credits.) But as we start developing new courses online, and even entire courses of study, that expedient isn’t always available.
As weak as the original ‘anchor’ for determinations of credit hours was, its absence seems to allow a college to say that just about anything can count for just about anything. This is the loophole some for-profits exploited, inflating the credit hours granted for instruction in order to maximize the amount of financial aid revenue they’d realize. For reasons I won’t pretend to understand, the Higher Learning Commission of the North Central Association gave its blessing to the institution that inflated its hours. Now some members of Congress are pushing for a federally mandated definition of a credit hour, to take it out of the hands of promiscuous accreditors and to prevent the squandering of taxpayer money on bogus courses.
It’s as if online education never happened. But it has.
Worse, though, a time-based definition of productivity -- a three credit class requires forty-five hours of seat time, say -- writes “zero productivity gain” into law. To understand this, you have to understand that ‘more productivity’ does not mean ‘more work.’ It means ‘more accomplished in the same amount of time.’ If you get more work done by just working longer, you aren’t increasing productivity; you’re just doing more of the same. Failure to understand this has led to all manner of misplaced polemic.
If you declare that no matter what you do, you can’t award credit unless you’ve consumed the same amount of time as last year and the year before that, then you’ve guaranteed zero productivity gain. Worse, you’ve actually penalized any attempt to improve productivity.
When the productivity of the economy as a whole is increasing a few percent a year, and higher ed isn’t, then all else being equal, we should expect the cost of higher education to outpace inflation by a few percent a year. Which is exactly what it has done for the last few decades. Add some cost-shifting from the public sector to tuition, some unfunded mandates, and the occasional boondoggle, and you’ve got a really impressive cost spiral.
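The arithmetic behind that spiral is just compounding. As a rough sketch (the two-percent figure here is illustrative, not a measured rate), a sector whose costs track wages but whose productivity is frozen will see its relative price pull away from the rest of the economy by roughly the productivity gap, compounded every year:

```python
# Illustrative sketch of the cost-disease arithmetic: if the wider economy
# gets ~2% more productive each year and higher ed is locked at 0%, higher
# ed's relative cost compounds upward at roughly that 2-point gap.
def relative_cost(gap=0.02, years=30):
    """Relative price of the zero-productivity sector after `years` years."""
    return (1 + gap) ** years

# After three decades at a 2-point gap, relative cost nearly doubles.
print(round(relative_cost(), 2))
```

Small annual gaps look harmless in any single budget year; it’s the compounding over decades that produces the “really impressive cost spiral.”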
If you want to save public higher ed, you have to break the cost spiral. That means you have to break the credit hour. Writing it into law is precisely wrong.
If you want to break the growth of the for-profits, don’t do it by chaining everyone to dead weight. Improve the publics, and make us more appealing as alternatives; students will vote with their feet. Move money away from student-based aid and into institution-based aid; reverse the cost-shifting trend. That will strengthen the publics and make for-profit skimming much harder. But for the sake of all that is holy and good, don’t mandate that we must never move beyond the productivity level we had in 1950. That’s not helping.
Friday, June 18, 2010
Admitting Defeat
I need the wisdom of my wise and worldly readers on this one.
It falls somewhere between a political question and an etiquette question.
Let’s say, for the sake of argument, that your campus has identified a few key goals, and that there’s pretty good campuswide agreement on those goals. Let’s say that those key goals have been given a consistently high profile. And let’s say that several different projects have run over the past couple of years in pursuit of those goals.
And just to make life interesting, let’s say that one of those projects just isn’t working. Despite the concerted hard work of many smart and well-intentioned people, it just hasn’t succeeded. And you’re personally identified in many people’s minds with both the goals and the project.
Let’s say that you still think the goals are valid, but you think it’s time to change tactics.
How, exactly, do you communicate that without destroying your own credibility, and/or the credibility of the goals themselves? How do you admit defeat without throwing anybody under the bus, or unduly feeding the cynicism of those who live for that sort of thing?
Put differently, how do you communicate the concept of ‘experiment’ in a way that doesn’t register as flippant or evasive?
Thursday, June 17, 2010
If I Could Bottle That...
A few weeks ago, we went to a local photographer to get some family portraits done. Last night we went back to see the proofs. On the way back home:
TW: I hope they can do something about my tooth. I hate the way it looks.
DD: I didn’t notice it.
TW: Well, I did.
DD: Maybe I was distracted by my Incredible Growing Forehead. It’s a fivehead.
TW: It’s not so bad.
DD: I look like an alien.
TW: TB, what did you think?
TB: All I could see was my scar.
DD: I didn’t see your scar!
TW: It wasn’t even noticeable!
TB: It was huge.
TW: We’re our own worst critics. TG, what did you see?
TG (cheerfully): Nothing!
And she’s right.
Wednesday, June 16, 2010
Disloyalty?
According to this story from IHE, a provost lost her job after her President found out that she had applied for a position elsewhere. Apparently the President was offended that she was willing to consider working someplace else, and told her that if she interviewed there, he would replace her. She interviewed, she didn’t get that job, and she lost the job she had.
Granting that this President overreacted badly, the incident still highlights a seldom-noted, but very real, issue for administrators looking for new jobs. There isn’t really a generally accepted etiquette for how to do that, or for how the current home institution should react.
Although the search for the first faculty job out of grad school sucks in many ways, one silver lining is that nobody blames you for looking. Grad school is supposed to end, and you’re supposed to go on a job search from there. It’s understood. You don’t have to worry about your references blabbing; if anything, you wouldn’t mind a little help.
Once you get that first non-temporary faculty job, though, things get messier. (I’m not referring to one-year or adjunct gigs, since most people understand leaving those.)
Some employers actually reward external searching through counteroffers. At my cc, the collective bargaining agreement renders that strategy moot. I couldn’t make a counteroffer if I wanted to.
Other employers get huffy and offended, and actually punish attempts to leave. Sometimes that happens overtly, as in this case; more often, it happens through a passive-aggressive whispering campaign in which your lame-duck status renders you ineffective for your remaining time there. In many administrative jobs, getting things done requires agreements in which others believe that you will follow through on your word. If they suspect that you won’t be around to follow through, they’re less likely to accept your word, and there goes your ability to do your job. Once you’re believed to be irrelevant, in some ways, you are. It’s self-fulfilling.
If you’re an administrator without a tenured position in your back pocket -- which is pretty standard in the cc world -- then trying to move between institutions can be risky. You have to try to gauge the local receptiveness to news of searches, and you have to decide who to ask to serve as references. (One tip from the employer’s side of the desk: never, ever, ever, under any circumstances, list someone as a reference without telling them first. Once, in following up on a candidate’s application, I called a listed reference and asked about candidate x. The listed referent* said “who?”) Typically you’re supposed to have one person above you in the hierarchy, one peer, and one direct report, and it’s best to have some gender diversity amongst them. If you’re in a low-trust setting, just assembling a good set of references can be a challenge.
Once you get to the finalist stage, even relative secrecy may not be an option. Particularly for more senior positions, it’s not unusual for the hiring institution (or its agent) to call anyone and everyone at the candidate’s home institution to try to get a fuller picture. (For Presidential searches, they’ll even announce the names of the finalists in the local newspaper.) If you’re trying to escape a low-trust institution and you have a near-miss, you could wind up in a situation like the one in the article.
I’ll admit to having been lucky with my searches. When it came time to disclose to my bosses that I was a finalist at wherever, they’ve been classy about it. I’ve made a point of taking the same approach with others. At PU, I had a brilliant professor who absolutely hated the fact that he worked at PU, and who couldn’t stop broadcasting the fact. He was pretty open about looking elsewhere, but thus far hadn’t succeeded. I made him a deal; I’d do everything in my power to help him on his search, including giving the glowingest truthful reference I could, if he would just stifle the drama and do his job in the meantime. He took the deal, and it worked in its first year, which I’m convinced was the best outcome for all involved. He got out, which he desperately wanted to do, and landed someplace where he was far happier. PU got a good year out of him. Afterwards, PU got to hire someone who actually wanted to be there. Everybody won, and being pissy about it wouldn’t have helped anybody.
In terms of both morals and harm reduction, I’m increasingly convinced that the right move is simply to support people in their paths, even if those paths lead elsewhere. If you find that the people you’ve hired become a sort of farm team for higher-level positions elsewhere, take it as a compliment; it shows that you have good taste. But for the poor candidate trying to escape a vengeful, low-trust setting, sometimes the high road is blocked. I winced at this story, because there but for the grace of God...
*referee? referrer? referent? source? I’m not sure what the most elegant term is here.
Tuesday, June 15, 2010
The Year-Later Evaluation
Several alert readers sent me this piece from the Washington Post. It glosses a study conducted at the Air Force Academy that finds that

Professors rated highly by their students tended to yield better results for students in their own classes, but the same students did worse in subsequent classes. The implication: highly rated professors actually taught students less, on average, than less popular profs.

Meanwhile, professors with higher academic rank, teaching experience and educational experience -- what you might call "input measures" for performance -- showed the reverse trend. Their students tended to do worse in that professor's course, but better in subsequent courses. Presumably, they were learning more.
The piece goes on to suggest that student evaluations are not to be trusted, because they reward entertainment, attractiveness, and/or easy grading. Instead, administrators should shuck their slavish dependence on shallow popularity measures and defer to faculty rank.
The argument collapses upon close examination, of course. If student evaluations had the implied effect, then these tougher and more effective teachers would never have had the opportunity to gain more experience, let alone get promoted to higher rank. The stronger performance of the senior group actually suggests that administrators are doing a pretty good job of separating the wheat from the chaff at promotion time (and of taking student evaluations with the requisite grains of salt). But never mind that.
The worthwhile element of the story is the prospect of basing professors’ evaluations on student performance in subsequent courses.
The appeal is obvious. If Prof. Smith’s English 101 students routinely crash and burn in English 102, while Prof. Jones’ English 101 students do great in 102, then I feel pretty confident saying that Prof. Jones is doing a better job than Prof. Smith. If your method of getting great results isn’t my own, but it works, the fact that it works is the important thing.
But it isn’t as easy as that. Start with ‘routinely.’ ‘Routinely’ is an ambiguous term. It could take years to get raw numbers large enough for any statistically significant measure.
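To put a rough number on that, here’s a standard two-proportion sample-size calculation (the pass rates and effect size below are made up for illustration, not drawn from any real class): distinguishing even a five-point difference in downstream pass rates between two professors takes on the order of a thousand students apiece.

```python
import math

def students_needed(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for a two-proportion z-test
    (5% two-sided significance, 80% power by default)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical numbers: telling a 70% downstream pass rate from 75%.
print(students_needed(0.70, 0.75))  # well over a thousand students each
```

At thirty or so students per section, that’s many years’ worth of sections before a single Smith-versus-Jones comparison means anything statistically.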
Course sequences aren’t always as clean as that, either. 101 to 102 is easy. But what about electives? What about the history professor who can’t help but notice that some students’ papers are consistently better than others? What about the smallish programs in which the same few professors teach the entire sequence, so they could, if so inclined, skew the sample?
What about the professor who actually takes some risks?
In a relatively high-attrition environment, how do you count the dropout? Do you control for demographics? Given that students in, say, developmental reading will normally do much worse than students in honors philosophy, what are you actually measuring? And if you go with course-based norming, aren’t you essentially pitting faculty in the same department against each other?
I bring these up not to discount the possibility or the appeal of outcome-based evaluation, but to suggest that it just isn’t anywhere near as easy as all that. (I haven’t even mentioned the implications for tenure if a senior, tenured professor turns out to be doing a horrible job.) Drive-by analyses like this project make for cute headlines, but actually defeat understanding. If you’re actually serious about presenting an alternative, you need to take a much closer look at the implications of your alternative across the board. You’ll also need some serious grains of salt. Go ahead and take some of mine.
Professors rated highly by their students tended to yield better results for students in their own classes, but the same students did worse in subsequent classes. The implication: highly rated professors actually taught students less, on average, than less popular profs.
Meanwhile, professors with higher academic rank, teaching experience and educational experience -- what you might call "input measures" for performance -- showed the reverse trend. Their students tended to do worse in that professor's course, but better in subsequent courses. Presumably, they were learning more.
The piece goes on to suggest that student evaluations are not to be trusted, because they reward entertainment, attractiveness, and/or easy grading. Instead, administrators should shuck their slavish dependence on shallow popularity measures and defer to faculty rank.
The argument collapses upon close examination, of course. If student evaluations had the implied effect, then these tougher and more effective teachers would never have had the opportunity to gain more experience, let alone get promoted to higher rank. The stronger performance of the senior group actually suggests that administrators are doing a pretty good job of separating the wheat from the chaff at promotion time (and of taking student evaluations with the requisite grains of salt). But never mind that.
The worthwhile element of the story is the prospect of basing professors’ evaluations on student performance in subsequent courses.
The appeal is obvious. If Prof. Smith’s English 101 students routinely crash and burn in English 102, while Prof. Jones’ English 101 students do great in 102, then I feel pretty confident saying that Prof. Jones is doing a better job than Prof. Smith. If your method of getting great results isn’t my own, but it works, the fact that it works is the important thing.
But it isn’t as easy as that. Start with ‘routinely.’ ‘Routinely’ is an ambiguous term. It could take years to get raw numbers large enough for any statistically significant measure.
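A back-of-the-envelope sketch of why: using the standard normal-approximation sample-size formula for comparing two proportions (two-sided test at the 5% level, 80% power), the numbers get large fast. The pass rates below are hypothetical, invented purely for illustration.

```python
from math import ceil

def students_needed(p1, p2, alpha_z=1.96, power_z=0.84):
    """Per-professor sample size to detect a gap between two pass rates.
    Standard normal-approximation formula, two-sided test at the 5%
    level, 80% power. Illustrative only."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    effect = (p1 - p2) ** 2
    return ceil((alpha_z + power_z) ** 2 * variance / effect)

# Hypothetical: Prof. Smith's 101 alumni pass 102 at 70%, Prof. Jones's
# at 75% -- a gap most departments would consider meaningful.
print(students_needed(0.70, 0.75))  # 1247 students per professor
```

At a hundred or so students per professor per year in a given sequence, that's more than a decade of data before the comparison means anything.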
Course sequences aren’t always as clean as that, either. 101 to 102 is easy. But what about electives? What about the history professor who can’t help but notice that some students’ papers are consistently better than others? What about the smallish programs in which the same few professors teach the entire sequence, so they could, if so inclined, skew the sample?
What about the professor who actually takes some risks?
In a relatively high-attrition environment, how do you count the dropout? Do you control for demographics? Given that students in, say, developmental reading will normally do much worse than students in honors philosophy, what are you actually measuring? And if you go with course-based norming, aren’t you essentially pitting faculty in the same department against each other?
I bring these up not to discount the possibility or the appeal of outcome-based evaluation, but to suggest that it isn’t anywhere near as easy as all that. (I haven’t even mentioned the implications for tenure if a senior, tenured professor turns out to be doing a horrible job.) Drive-by analyses like this project make for cute headlines, but actually defeat understanding. If you’re serious about presenting an alternative, you need to take a much closer look at its implications across the board. You’ll also need some serious grains of salt. Go ahead and take some of mine.
Monday, June 14, 2010
Speed and Sleep
Does your college's administration have a clue, or is it a living, breathing refutation of both Darwin and Intelligent Design?
Take this simple test and find out!
1. Does your college allow new students to enroll, fresh off the street, after classes have started? (I'm not referring to drop/add. I'm referring to entirely new enrollments; the kid wasn't a student yesterday, but he is today.)
a. Yes.
b. No.
Scoring: If you answered 'a,' your administration is utterly clueless. If you answered 'b,' there's hope.
At Proprietary U, of course, the answer was 'a,' with a vengeance. The faculty protested, the lower-level administrators (hi!) protested, but to no avail. The argument was always "how will we hit our numbers if we turn people away?" Unsurprisingly, the last students in were the first students out; their attrition rate was astronomical. Which makes sense, if you give it a moment's thought. A new student right off the street has to have several things in place to succeed: finances (and financial aid), childcare (if applicable), transportation, work hours, textbooks, and mindset, for starters. If you're still cobbling that together, and you drop yourself right into classes already in progress, the odds of getting overwhelmed and either dropping out or flunking out are staggering.
Of course, high attrition this semester means that, to hit your numbers, you have to keep the door wide open again next semester. It became a self-reinforcing cycle of failure.
Colleges that summon up the fortitude to shut down very late registration usually do so with great trepidation, fearing an enrollment cataclysm. Instead, they usually find the near-term impact minimal, and the medium- and long-term impact positive. (In the context of the recession-driven enrollment boom, I'd change "minimal" to "zero.") That's because total enrollment is a function of recruitment plus retention; if the latter goes up, it can make up for losses in the former. And students who get their ducks in a row before starting classes are much likelier to come back for more.
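The arithmetic behind retention making up for fewer starts can be sketched with a toy steady-state model. The headcounts and retention rates below are invented for illustration, not drawn from any actual college's data.

```python
def steady_state_enrollment(new_students, retention_rate):
    """Long-run headcount if `new_students` start each term and a fixed
    fraction of everyone enrolled returns the next term.  Solves
    E = new + retention * E, i.e. a geometric series.  Toy model:
    ignores graduation and assumes the rates hold constant."""
    return new_students / (1 - retention_rate)

# Door wide open: 1,000 starts per term, but only 40% come back.
print(steady_state_enrollment(1000, 0.40))  # ~1,667 enrolled

# Close very late registration: 10% fewer starts, retention rises to 55%.
print(steady_state_enrollment(900, 0.55))   # ~2,000 enrolled
```

Even giving up a tenth of the incoming class, the modest retention bump leaves total enrollment higher, not lower.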
I think of it as the difference between speed and sleep. In the very short term, speed may help combat fatigue, but you'll pay for it with interest. Sleep, on the other hand, actually solves the problem.
The astute reader will notice here that "retention" and "high standards" are complementary. There's a mathematically sound economic argument for keeping the bar reasonably high. Too many people within academe -- both faculty and administration -- simply assume that any discussion of retention is code for watering-down. In fact, forcing students to get their stuff together before they start can be a winner all around.
Some particularly cynical sorts at PU used to try to float a humanitarian argument for very late admissions, arguing that these students have difficult lives and need to be cut some slack. I couldn't get past the idea that if we knew they were doomed, we were really just taking their money. A true humanitarian gesture would involve sitting down with the late-arriving student and helping him understand what he would need to do to put himself in a position to succeed; if that meant getting his life drama under control first, so be it. Even if some of those students don't return, the improved completion rates of the ones that do will more than pay for the difference. Four semesters for one student is better than three one-semester wonders, on both budgetary and humanitarian grounds.
And that doesn't even count the logistical madness on campus. The last-minute students consume a disproportionate share of staff time in helping them construct schedules (since nearly everything is already full); package financial aid (since it's a rush job); navigate the various paperwork requirements (immunizations, etc.); and get textbooks (which may be sold out). When these tasks get rushed, errors multiply, and these students don't have much margin for error on a good day. Even a resilient student faces an uphill battle at the last minute.
I'd love to hear from those wise and worldly readers who've lived through a regime change, going from 'a' to 'b' or vice versa. How did the change play out on your campus?
Friday, June 11, 2010
Pop!
Is there a higher education bubble?
I've read several commentaries recently asserting that we're in one. The bill of particulars usually includes some mix of the following:
- Enrollments in colleges and universities are the highest they've ever been.
- Tuition is the highest it has ever been, even after inflation, and it's increasing much faster than inflation.
- In the Great Recession, many new graduates are simply unable to pay back their impressively high student loans.
- Private lenders rushed into the student loan market a few years ago, covering the gap between what the government would lend and what colleges charged. As with the housing market, you can't multiply leverage forever.
- The payoff to college degrees declines as degrees become more commonplace. Some people respond to that by going for ever-higher levels of degrees, in a sort of credentialist arms race.
Rebuttals usually include some of the following:
- Recessions don't last forever. A temporary blip does not a long-term change make.
- The net payoff, in salary terms, for a college degree is still strongly positive.
- Some of the tuition increases are the result of public disinvestment (or "cost-shifting"), rather than out-of-control spending.
- In an information-based economy, some increased demand for education is rational.
There's merit in both sets of arguments, but they strike me as answering the wrong question. For what it's worth, I'd suggest that the error is in treating "higher education" as if it's one thing. It's actually a series of different things, some sustainable and some not.
At a really basic level, I'd divide degree-granting colleges into four groups: high-cost high-prestige, high-cost low-prestige, low-cost high-prestige, and low-cost low-prestige. The high-cost high-prestige places -- think Harvard and Yale -- will be fine. They have more money than God, and they sell exclusivity.
Low-cost high-prestige places -- the public Ivies, say -- will be fine if they can keep up their perceived quality. There's always a market for a good deal.
Low-cost low-prestige places -- community colleges leap to mind -- will be fine if they can shift the ground of conversation from prestige to outcomes. A community college that does a good job at the first two years of a degree (or a two-year occupational degree) is a great deal; students who graduate from locally-respected programs in nursing or criminal justice can find good jobs (in normal times) at minimal cost. And if the cc does general education well, it can become the first half of a low-cost high-prestige program. For a kid who's basically talented but still unfocused, doing the first two years at a cc before transferring to someplace good can make a world of sense, and can greatly reduce student loan burdens. (Conversely, a community college that does a lousy job at the first two years has no compelling reason to exist.)
But then there's that pesky high-cost low-prestige sector. Not to put too fine a point on it, but these places are in very deep trouble. This is where the 'bubble' argument has real merit.
As a parent, I can see the argument for spending 50 grand a year to go to Princeton. I don't see the argument to spend 50 grand a year on St. Nobody College, or on Proprietary U.
The rising number of 18-year-olds, years of cheap credit, and then the Great Recession combined to temporarily mask some of the issues in that sector. But as the recession recedes and the number of high school grads starts to drop again, these colleges will be exposed.
In the long run, this is probably a good thing. The model of high-cost low-return doesn't make a hell of a lot of sense, and this sector will be less able to weather storms than others. The tuition-driven non-profits don't have much cushion against bad times, and watering down their quality simply leaves them even less able to compete with the public sector. The for-profits are built on growth to an even greater degree than traditional higher ed is. When they're turning massive profits, they can grow at an astonishing rate. But when the profits slip, there's nothing left to hold them up. I saw this in my own time at Proprietary U in the late 90's and early 00's. In the late 90's, it grew at a breathtaking rate; in the early 2000's, it took a series of body blows. Since the for-profits are typically more specialized and run on a quarterly-return basis, they're subject to vertiginous swings of fortune. (Of course, one could always try to go upscale with a for-profit. I'm still waiting for this to be done right.) They're quick to build, and quick to dismantle. You heard it here first.
In the meantime, though, things could get ugly. Like the lions in winter I mentioned yesterday, these institutions won't go quietly into the good night. They'll go down swinging. And when they start swinging wildly, they'll do real damage.
In the best case, they'll latch onto other institutions to survive. I wouldn't be at all surprised to see more private colleges form two-plus-two partnerships with local community colleges in attempts to tap into the pipeline of cc grads. This is constructive, as far as it goes, and I'm happy to help it happen.
On the downside, though, I expect to see increasing liberties taken in the name of economic survival. Standards will be lowered, financial aid guidelines will be stretched, faculties will be adjuncted-out, students will be overtly catered to and covertly fleeced. All of which is terrible for the students, of course, but which will also exert downward pressure on competing institutions. It won't last forever -- death spirals don't -- but the process won't be pretty.
If we're smart, we could reduce the future damage by putting some solid controls on the current situation. The for-profits will do pretty much whatever they can get away with; if you want to limit the possible damage they could inflict, you need to improve (and enforce!) regulation. Among the nonprofits, moving away from unsustainable conceits like tenure and the credit hour and towards meaningful measures of actual learning is obviously necessary over the long term. There, too, I'd expect the process to be ugly, but necessity is a mother.
Or we could curse the sun for rising, hold our collective breath until we turn blue, and wait for the pop. Maybe this time will be different!
That's where bubbles come from.
Thursday, June 10, 2010
Lions in Winter
This post by Tenured Radical is one of the best things I've read in a long time. It was occasioned by the semi-forced retirement of Helen Thomas, the journalist whose comments about Israel and Palestine ended her career, but the part that spoke to me was about the Venerable Tenured Icon who had gone badly off the rails. It's worth quoting at length:
This is, of course, a common problem in the academy. Venerable professor famous for irascible personality and eclectic remarks goes right over the edge one day and has to be forcibly retired, when in fact the signs of ineffectiveness and mental decline have been clear to close colleagues for several years: inappropriate remarks, fits of rage and/or confusion, memory lapses of gargantuan proportions. And yet, you go to the administration and say, "Hey, I think we have a problem" and administrators claim their hands are tied because of tenure, academic freedom, blah, blah, blah. I have a friend who made this lonesome trek year after year, recounting numerous horror stories that appeared in the teaching evaluations or were related by befuddled students about Famous Professor X, and was repeatedly sent away with a condescending lecture about age discrimination. In one of these meetings, an administrator said to my friend sharply, "Are you a doctor? What makes you think you know what is going on?"
"Oh," s/he replied casually: "Venerable Professor doesn't recognize me anymore, and s/he recently asked the administrative assistant who she was and why she was robbing the department office." Needless to say, nothing happened until said faculty member let loose a blistering stream of muddled hate speech at a stunned group of first-year students who fled the room weeping and dropped the class en masse.
Other than the "claim their hands are tied" part -- I'm calling bullshit on that, since their hands really are tied -- this is spot-on.
Since the Supreme Court decided -- absurdly, in my view -- that tenure is fine but mandatory retirement isn't, there's literally no way to push the declining self-caricature out the door short of a documented public meltdown. Of course, by the time that happens, there has typically been a long train of abuses that either weren't public or weren't quite enough in themselves, as documented, to stand up in court. (Part of that usually has to do with the power that senior faculty have, and the fear that others have of that power. Fear of retaliation for coming forward is powerful, and it prevents the effective documentation of some very real behaviors.) And the combination of age discrimination laws, tenure, unions, the ADA, and public sympathy can make it effectively impossible for even a conscientious administrator to solve the problem.
Imagine trying to thread that needle. You get reports of memory lapses and random belligerence. The former are explained as side effects of a medical condition that now you have to accommodate. The latter are explained as longstanding personality traits. Plus you get grieved or sued on the basis of age/disability discrimination for even bringing it up. Now you're the bad guy, the declining jerk sticks around, and the folks who actually stepped forward to report things are walking on eggshells. Fairness is served how, exactly?
Part of the issue is a basic category mistake. Colleges don't exist to benefit the faculty. They exist to benefit the students. If you start from that basic proposition, then the burden of proof on performance rightly shifts from "prove I'm not performing" to "prove you are." If a formerly-helpful professor just isn't getting it done anymore, his students suffer. Demagoguery about "age discrimination" doesn't change that. If the students are suffering, I have a hard time justifying why the professor is still there. But as far as the law is concerned, the college exists for the benefit of tenured faculty, so the rules are backwards.
The usual suggested palliatives -- post-tenure review, say -- are nowhere near enough to get the job done. The premise of post-tenure review is that the person reviewed is capable of doing better. If the capacity for improvement is gone, the point is lost. It's like telling a short person to get taller.
In other lines of work, where tenure doesn't exist and performance is easier to measure, this is less of an issue. Ken Griffey Jr. was one of the best baseball players of the 90's, but he had to hang up his cleats this year because he just couldn't get it done anymore. The team didn't have to prove its case; the evidence was obvious and beyond dispute. In quick-turnaround jobs like sales, the feedback is quick enough that arguments from history simply don't count; if your current numbers suck, you're done.
But since the courts have held that tenure amounts to ownership of the job, the standard you have to meet is far beyond mere performance measurement; it has to be severe enough to justify expropriating somebody. It almost never happens, and that's by design.
When I discuss this topic, I usually get accused of ageism. The charge is diversionary and slanderous. I have personally seen professors repeat sections of lecture because they've forgotten what they've said. I've seen professors drool during long, silent, painful pauses in class. I've endured meetings in which "senior moments" were so numerous that they actually became a running joke. I've had students complain that "he's a nice guy, but he has repeated the same lecture three classes in a row." It's all real. And the abuse that lions in winter dole out to support staff and random students is astonishing.
Nobody designed tenure to last forever. The classic AAUP statement on tenure references a "normal retirement age," and there's a reason for that. If you are going to have a tenure system -- a monstrous 'if,' but let's go with it for the time being -- there has to be an expiration date. There always was.
The alternative is to let the lions in winter roar, and maul, and abuse students and staff. It's wrong. And in the current landscape, it's unavoidable. If we insist on keeping a tenure system, we need to have a serious conversation -- by which I mean one that acknowledges reality and doesn't resort to demagoguery -- about when and how to push people out the door. We've had 16 years of relying on people to do the right thing voluntarily, and the results are in. It hasn't worked. Some lions don't go gently, but they have to go.
Other than the "claim their hands are tied" part -- I'm calling bullshit on that, since their hands really are tied -- this is spot-on.
Since the Supreme Court decided -- absurdly, in my view -- that tenure is fine but mandatory retirement isn't, there's literally no way to push the declining self-caricature out the door short of a documented public meltdown. Of course, by the time that happens, there has typically been a long train of abuses that either weren't public or weren't quite enough in themselves, as documented, to stand up in court. (Part of that usually has to do with the power that senior faculty have, and the fear that others have of that power. Fear of retaliation for coming forward is powerful, and it prevents the effective documentation of some very real behaviors.) And the combination of age discrimination laws, tenure, unions, the ADA, and public sympathy can make it effectively impossible for even a conscientious administrator to solve the problem.
Imagine trying to thread that needle. You get reports of memory lapses and random belligerence. The former are explained as side effects of a medical condition that now you have to accommodate. The latter are explained as longstanding personality traits. Plus you get grieved or sued on the basis of age/disability discrimination for even bringing it up. Now you're the bad guy, the declining jerk sticks around, and the folks who actually stepped forward to report things are walking on eggshells. Fairness is served how, exactly?
Part of the issue is a basic category mistake. Colleges don't exist to benefit the faculty. They exist to benefit the students. If you start from that basic proposition, then the burden of proof on performance rightly shifts from "prove I'm not performing" to "prove you are." If a formerly-helpful professor just isn't getting it done anymore, his students suffer. Demagoguery about "age discrimination" doesn't change that. If the students are suffering, I have a hard time justifying why the professor is still there. But as far as the law is concerned, the college exists for the benefit of tenured faculty, so the rules are backwards.
The usual suggested palliatives -- post-tenure review, say -- are nowhere near enough to get the job done. The premise of post-tenure review is that the person reviewed is capable of doing better. If the capacity for improvement is gone, the point is lost. It's like telling a short person to get taller.
In other lines of work, where tenure doesn't exist and performance is easier to measure, this is less of an issue. Ken Griffey Jr. was one of the best baseball players of the 1990s, but he had to hang up his cleats this year because he just couldn't get it done anymore. The team didn't have to prove its case; the evidence was obvious and beyond dispute. In fast-turnaround jobs like sales, the feedback comes quickly enough that arguments from history simply don't count; if your current numbers suck, you're done.
But since the courts have held that tenure amounts to ownership of the job, the legal standard you have to meet is far beyond mere performance measurement. You have to meet a legal standard severe enough to expropriate somebody. It almost never happens, and that's by design.
When I discuss this topic, I usually get accused of ageism. The charge is diversionary and slanderous. I have personally seen professors repeat sections of lecture because they've forgotten what they've said. I've seen professors drool during long, silent, painful pauses in class. I've endured meetings in which "senior moments" were so numerous that they actually became a running joke. I've had students complain that "he's a nice guy, but he has repeated the same lecture three classes in a row." It's all real. And the abuse that lions in winter dole out to support staff and random students is astonishing.
Nobody designed tenure to last forever. The classic AAUP statement on tenure references a "normal retirement age," and there's a reason for that. If you are going to have a tenure system -- a monstrous 'if,' but let's go with it for the time being -- there has to be an expiration date. There always was.
The alternative is to let the lions in winter roar, and maul, and abuse students and staff. It's wrong. And in the current landscape, it's unavoidable. If we insist on keeping a tenure system, we need to have a serious conversation -- by which I mean one that acknowledges reality and doesn't resort to demagoguery -- about when and how to push people out the door. We've had 16 years of relying on people to do the right thing voluntarily, and the results are in. It hasn't worked. Some lions don't go gently, but they have to go.
Wednesday, June 09, 2010
"Can You Tell Me Why I Didn't Get the Job?"
The short answer is no.
The longer answer is complicated.
Over the last year, I've had more candidates ask me this than I had in the previous several years combined. I suspect it's a function of the abruptly-worse job market, in which people who might have been shoo-ins in the past unexpectedly fall short. I've heard it asked out of apparently sincere bafflement, in an I'm-trying-to-trip-you-up tone, and in indignant anger. I can't answer any of them.
I hate the question, because it's one of those times when the ethical impulse and the legal impulse conflict. As much as I'd like to answer the question in some cases, there's nothing to be gained for the college by doing it, and potentially a lot to lose. So I fall back on something like "it was a very strong pool," which is true but not revealing (or helpful).
Ideally, I'd be able to say things like "your answer to x suggested that you're settling for this job, and other candidates seemed actually to want it," or "you didn't really answer question x." But it's hard to know where an answer will go once it's given. Someone who is honestly looking for tips to improve might decide later that the reason given sounded discriminatory, and will use those words against the college. Or, she might try to argue the points, and you don't want to get sucked into that conversation. (I learned in my teen years that you can't argue your way out of "dumped.") Every statement you elaborate can, and may, be used against you in court.
Although there have been times when I wished I could have shared some pointers, at the end of the day the question is lawsuit bait. "The committee thought another candidate had more range." Well, was the range desired indicated in the ad? If not, is it a pretext? "We went with an internal candidate." So the search was never really valid in the first place? "It really came down to fit." So you don't hire my kind? "One member of the committee hated you." So much for confidentiality...
I could try to dodge any sinister readings by simply piling on the negativity, but there's such a thing as adding insult to injury. And much of the time, it wouldn't be true. Typically, everyone who makes it to the finalist stage is strong, and I have to admit that many cases come down to degrees of excellence or who better complements the department as it's currently composed. Losing doesn't mean you're a loser.
Then there's the epistemological issue. Decisions made by committee can be hard to pin down to clear reasons. Why did the 3-2 vote go the way it did? Legal scholars make careers trying to suss out the real reasons behind 5-4 Supreme Court decisions, and they at least have the advantage of having written opinions to parse. I can theorize as to why someone on a committee voted the way she did, but if pushed, I'd have to admit that it's mostly speculation. And anyone who has served on a contentious committee can attest that the results of votes can be surprising. Committee deliberations can go in unanticipated directions, and group dynamics take on lives of their own. Attributing a committee decision to a single post-hoc reason is usually reductive at best, if not simply arrogant. And getting sued over a speculative post-hoc statement that wasn't even accurate is a colossal waste of resources.
None of this is terribly helpful to spurned candidates; I get that. Making it to the finalist stage, and then getting shot down without a reason given, can be frustrating. (I've been there.) But most of the time, it's the least-bad response to a difficult situation.
Wise and worldly readers, have you seen (or received) an answer that actually helped?
Tuesday, June 08, 2010
"Get Your Gen Eds Out of the Way"
Although I know I'm tempting the speech-code-police to come after me, I'll admit that if I were king of higher ed for a day, I'd ban the phrase "get your gen eds out of the way."
It's one of those phrases that well-meaning advisors use to try to help students plan their schedules. But I'm convinced it does untold damage.
"Gen Eds" are the courses outside your major that you're required to take to get your degree. English composition, math, and suchlike are typically required of students in almost every degree program. The idea behind the requirement is to ensure that every college graduate has at least some fluency in the basics of what most of us expect an educated person to be able to do or to know. (On a less exalted level, gen ed distribution requirements often also constitute a de facto jobs program for faculty in certain disciplines. Anyone who doubts this is invited to sit in on a faculty senate discussion of changes to gen ed requirements.) Even if your major is Early Childhood Ed or Marketing, you should still be able to write clearly and handle grownup math. I'd argue that something like "Intro to American Government" should be a requirement, as should something like "Life Economics," but that's me.
Of course, many students perceive the requirements as entirely pointless, or as a form of hazing. They want to get right to the good stuff, or at least to what they perceive as the useful stuff, and they resent having to take anything else. Anyone who has taught those classes knows the frustration of hitting a wall of "why do we have to take this class?" on the first day.
In dealing with both faculty and staff, though, I see plenty of well-meaning people throughout the college who actually feed that cynicism. In helping students navigate degree requirements and cobble together schedules that work, it's easy to go native and adopt the perspective of the student a little too uncritically. In dealing with a skeptical student, the "get your gen eds out of the way" line can function as a sort of hook; it acknowledges some of the student's perspective, in the service of pulling the student along. Sometimes, that can work.
But students are quick to pick up mixed messages. And once the well is poisoned, getting it clean again isn't easy.
Wise and worldly readers, I seek your counsel. Have you seen an effective and non-patronizing way to reduce self-defeating messages like "get your gen eds out of the way" on your campus?
Monday, June 07, 2010
Wal-Mart University? Really?
I take a week off from blogging, and Wal-Mart announces that it's entering higher education! I can't leave you people alone for one minute...
Anyway, it appears that Wal-Mart is entering into an agreement with the American Public University system -- which is for-profit, not public -- to offer its employees a group rate on any of several online degrees. Wal-Mart has a history of hiring from within, but many of its front-line staff don't have the educational background to move up, so this is a way for the company to grow its own.
A little quick research revealed that American Public University is an entirely online operation with a history of specializing in serving military personnel. (The system comprises American Military University and American Public University.) That may explain why its "arts and humanities" offerings include "air warfare" and "civil war studies" but not, say, poli sci. It's regionally accredited by the Higher Learning Commission of the North Central Association, which is the same agency that accredits the University of Michigan and Northwestern. (It's also the agency that accredits the University of Phoenix and DeVry.) As such, it's eligible for Federal financial aid, and its students will have a legitimate expectation of transfer credit, should they try.
This is a bit of an inkblot test for commenters, so without tipping my hand overly much, a few opening thoughts:
- Even with the discount, the cost to the student for an Associate's degree is still higher than the cost at most community colleges. Community colleges with robust online offerings should be more than competitive here, if the prospective students know about them. I'm just sayin'.
- It doesn't appear that APU has any full-time faculty, at least from a quick perusal of the website. (I'm open to correction on this.) That doesn't seem to be an issue with North Central, judging by its accreditation of Rio Salado College, but it certainly raises a question. Might it be time for North Central to reconsider some of its standards?
- As a national system, APU can go into territories usually covered by, say, Middle States or SACS, and fly under the banner of North Central. (It's the same idea as a cruise ship flying a Liberian flag.) To the extent that Middle States and/or SACS have more stringent requirements in some areas, APU may have a competitive advantage on cost, at least at the four-year level.
- The real eyebrow-raiser for me was the offer of academic credits for Wal-Mart work experience. Apparently, the ethics training Wal-Mart provides its employees will form the basis for some academic credits. I'll repeat that for emphasis. The ethics training conducted by Wal-Mart will be given academic credit. Just let that one sink in for a few minutes.
- The gift of academic credit for work experience may be a form of golden handcuffs. I'd be alarmed if those credits were accepted in transfer just about anywhere. Since students wouldn't want to lose credits, they'd have to see the APU program through. It's like a non-transferable coupon.
- In thinking about the appeal of the program for Wal-Mart employees, I kept reflecting on the fact that the program was originally designed for soldiers on active duty. I understand the need for temporal flexibility with soldiers on active duty, and I have no issue with it. In a war zone, things happen when they happen. But what does it say about Wal-Mart as an employer that its employees need the same level of special accommodation as soldiers on active duty? Low wages are bad enough, but low wages combined with fluid hours just add insult to injury. If Wal-Mart wanted to encourage its employees to stick around, one way to do it would be to offer more stable and predictable hours. Let people plan their lives more than a week in advance. Afghanistan is a war zone, but the home and garden department isn't.
- A few years ago, Wal-Mart started moving into financial services. Combine Wal-Mart U with Wal-Mart private student loans to Wal-Mart employees, and we're getting uncomfortably close to the old "company town" model. This hasn't happened yet, but it wouldn't take much more than a nudge to get there. Wal-Mart could even garnish the pay of its employees who fall behind on loan payments! Once the logic starts to unfold, it's remarkably difficult to stop.
- As easy as knee-jerk indignation is -- I'll admit to some of that in bullet point four, above -- the real burden on public or non-profit private higher ed is to explain what it has to offer that Wal-Mart U doesn't. Within the world of higher ed, we may take things like "adjunct percentages" seriously, but that's because it's a bread-and-butter issue for us. For the twenty-something cart shagger looking to get ahead, the "adjunct percentage" debate is entirely abstract. If APU helps him move up in the organization, and the logistics are manageable, who's to blame him? Once we get the throat-clearing out of the way, we need to be able to explain -- convincingly, briefly, repeatedly, and correctly -- why what we offer is better. Either that, or we need to prepare to have our collective lunch eaten.
Wise and worldly readers, what do you make of the prospect of Wal-Mart U?