Wednesday, December 20, 2006
I couldn't agree more. In fact, the sense of ownership distorts the very organization.
The HR director at my current college once defined tenure as the professor owning the job. The very concept of 'owning' a job struck me as risible – do you write your own paycheck? – but it certainly fits much of the behavior I've seen.
There's a basic contradiction at the heart of many professors' sense of what their job is. On the one hand, they are the owners of the college, lords-and-high-masters of all that is academic. On the other hand, they are not to be bothered with organizational minutiae, nuts-and-bolts issues, or (heaven forbid!) any discussion of finances beyond their own raises.
It's one or the other. If you really own the college, then you own the entire college and you attend to all of it. That includes mundane stuff like figuring out how to cover a much-higher-than-expected HVAC or snow-removal bill this year, what percentage of adjuncts to employ, and who to fire if revenues fall short. If you don't want to be bothered with any of that, then you're not an owner; you're an employee. (I've noticed that most of the folks who bray the loudest about faculty sovereignty also run the fastest to the union when they're asked to produce. Owners don't have unions.) There's no shame in that – I'm an employee, as are most people – but implicit in the concept of 'employee' is 'accountability to the employer.' That means answering the call – not mindlessly or endlessly, but answering nonetheless – when the employer needs outcomes assessment, or classes on Tuesday instead of Wednesday, or a course adapted for online delivery.
(Conceivably, one could object that faculty are neither owners nor employees, but independent contractors, on loan from their disciplines. This is even sillier. 'Contractor' implies the existence of a 'contract,' which implies finite duration. Contractors don't have tenure, nor do they have unions. For that matter, they usually pay for their own health insurance, if they have any at all. Contractors don't own their jobs.)
I've found that some department chairs honestly regard their departments as their own personal property. As they see it, removing them from the chair position would be confiscating that property, which cannot happen unless they've committed a crime. This is insane, but common.
I've run into variations on this roadblock repeatedly. In trying to bring something resembling organizational rationality to what had been largely a 'courtier' system, I keep banging into the former nobles' sense of ownership. They're willing to do courtesy consultations with the administration, but as far as they're concerned, the job of the administration is to find them money and sing their praises. Anything beyond that is overstepping, which calls for the ritual “shocked and offended” proclamations.
They just don't get it.
Colleges didn't spring from the mind of Zeus, or drop from the sky fully formed. They're organizations, like any other. They require revenue and budgeting and real effort to generate and maintain the growth that allows the faculty not to get their hands dirty. Colleges don't exist to provide employment to tenured faculty. (The folks over at Curriculum Committee sometimes forget that.) Faculty are employees, and professorships are, among other things, jobs. If the college gets to the point where some of those jobs no longer need doing, then they shouldn't be done.
As near as I can figure, some fairly naïve administrations in the past played along with the sense of ownership as a way to motivate productivity in the absence of actual material accountability. Since raises are contractual and across-the-board, and tenure expires only at death, it can be tough to get some folks to step up. If they feel ownership of their area, they will step up in certain ways.
The catch is that once they've been in the driver's seat for a while, they forget that it was ever otherwise. When organizational needs change – and they do – they take any suggestion of change as an affront. Who are you to tell me to measure outcomes? I built this department! No iceberg could possibly sink my mighty ship!
Tenure isn't ownership, but it can enable the illusion of ownership. Illusions can be fun. But reality has a way of intruding, and no amount of “shocked and offended” will change that.
Just curious. As to your post on ownership, I'll just second the motion and sit down.
The first is a total lack of agreement on what a good outcome would be. The second is a total lack of agreement about how to assess those outcomes. We had long arguments about how to collect data that would be valid.
... but we never compare notes about how we're doing it and there is no coherent expectation that students having taken class "A" should enter class "B" with certain skills.
because as the "owner" of calc-based physics for 6 years, this is something that I have to do. It is also something that I think CCs do better than Unis. I'll try to be brief.
Data are hard to come by. I have asked repeatedly to get the first-semester engineering GPA of the kids who pass my class, since comparing them to the ones who pass the same class at the Uni is the best "outcome" measure that I can think of. In lieu of that, I rely on anecdotal feedback from some of the faculty over there, and from students who compare themselves favorably to the competition. IMHO we can learn a lot from data about transfers who passed specific classes at our college.
We compare notes a lot in my division. Two people who teach calculus to my physics students are just a few doors away, and we compare notes on skill sets and course sequencing and what learning is missing (not subject matter, but retention) from the math classes our students have in common. I know that the biologists and chemists and nursing faculty (in another division) do the same thing. In a Uni, these groups would never discuss those matters, let alone every week.
One of our calculus faculty uses an exit exam as one requirement for passing calc I, and sometimes uses it at the start of calc II. This can lead to some interesting discussions about learning outcomes among faculty, and a teachable moment about learning itself with students who will be expected to know that material a year from now without any further review.
CCPhysicist what you suggest would be wonderful. I'd be happy if everyone knew what the learning objectives of the department were. Heck - I'd be happy if we all taught people how to use a microscope the same way. But you can't take people who've been educated to be lone wolves for the first 12 years of their academic career (Ph.D. + post-doc) and have little or no experience teaching and then have them instantly transform into collaborative folk. I also know that there would be some in the department who would reject everything you use as "data" because it would be anecdotal and therefore not meaningful. Never mind that it's the best you can do - if it's not perfect, forget it. This is the sort of time when I ask where exactly it is that they threw that bathwater because I hear a baby screaming.
Part of the problem at the place I work is that very few people teach the same class. Most carve out a niche for themselves and stick to it - it really takes advantage of the diversity of faculty but it creates problems when they go on a sabbatical or get sick. Some individuals are particularly vicious in protecting their curricular "turf". Whenever we lose faculty, it's years before we can make up the loss.
Since I play a role that is not quite faculty, not quite administrator, I know more than most about the guts and inner workings of the University. We have a moderately functional system with enough excess capacity that the adventurous can divert resources their way. What surprises me is how little faculty know about "the system". I've found that it's not really their fault that they don't know how things work – they are allowed to exist in a never-never land where resources appear or disappear and no one ever really explains why or how. This gives the department chair a lot of power, because he can say "we can't afford this" and no one can question him. But it also means that no one knows how to work the system to put more resources in the department's hands. For example, people always complain when their enrollment rises – they don't see the direct connection between grading 25 more finals and getting that new piece of equipment they've been asking for.
Faculty will never see themselves as employees but I can imagine partnership working well. But administrators would have to be willing to tell faculty more about how things work and faculty would have to be sufficiently interested to listen.
As soon as my livelihood is tied to these outcome assessments, I could begin to feel pressure to compromise my standards. For example, I think depth in certain key topics is much more important than breadth in my area of science. The national standardized tests out there emphasize the opposite: they cover all the possible topics in the standard intro textbook. To make sure that my students do well on the standardized test, I'm going to feel some pressure to move toward teaching to the test, especially if my job might be in the balance.
There is also fear that I'll be held accountable for outcomes that I have no control over. I can do a lot of hand-holding through the class and get a good result on an end-of-course test. Once they've transferred, they're on their own, and many factors other than how well I've prepared them for the next level can affect their GPA. So I could potentially be penalized because students were distracted by parties or a family crisis. (Note: I'm at a small school, so we don't have a very large number of students to track, and just one or two students can make a big difference in the percentages.)
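The small-numbers worry here is easy to make concrete with a little arithmetic. A quick sketch (the cohort sizes and counts below are made-up, purely for illustration):

```python
# Illustration of the small-cohort worry above: when only a handful of
# transfer students are tracked, a single student's outcome swings the
# reported percentage dramatically. All numbers here are hypothetical.
def success_rate(successes, cohort):
    """Percentage of a cohort counted as successful."""
    return 100.0 * successes / cohort

# One extra student failing in a cohort of 8 vs. a cohort of 80:
small_swing = success_rate(7, 8) - success_rate(6, 8)     # 12.5 points
large_swing = success_rate(70, 80) - success_rate(69, 80)  # 1.25 points

print(f"one student moves a cohort of 8 by {small_swing:.2f} points, "
      f"but a cohort of 80 by only {large_swing:.2f} points")
```

With eight tracked students, one family crisis moves the "success rate" by more than twelve points; with eighty, the same event barely registers.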
In the end, the whole "outcome assessment" thing is often presented in a very fuzzy, poorly defined way. Every class I teach already has outcome assessment: there are tests, labs, and homework. Am I being asked to just document this (outside of the syllabus)? Is that really meaningful? I could include outside measurements of outcomes (standardized tests or grades in future courses), but I know that their usefulness is limited. How can I be sure that those above me will understand those limits?
The first thing you need to do is describe your desired outcomes – people who get an "A" can do xyz, while people who get a "B" can do x and y and sometimes z, etc. Collect data not just from your class but from classes taught by others. Then you need to do stats. It may be that you never have enough numbers to pass a t-test, but at least you would know that – and you might want to increase your sample size (over multiple years) or pick different outcomes. Or maybe there is no statistically significant difference between your A and B students – important to know, as it would render your grading system meaningless.
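The A-versus-B comparison described here boils down to a two-sample t-test on some later outcome measure. A minimal sketch, using made-up placeholder scores rather than real student data:

```python
# Sketch of the check described above: do students who earned an "A"
# actually outperform "B" students on a later outcome measure?
# The score lists are hypothetical placeholders, not real data.
from math import sqrt
from statistics import mean, stdev

a_students = [88, 92, 85, 90, 87, 91]  # e.g., follow-up exam scores of A students
b_students = [80, 84, 78, 83, 79, 86]  # same measure for B students

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

t = welch_t(a_students, b_students)
print(f"A mean = {mean(a_students):.1f}, B mean = {mean(b_students):.1f}, t = {t:.2f}")
```

A large |t| suggests the A/B distinction tracks a real difference in the outcome; a t near zero, over enough semesters, is exactly the "your grading system is meaningless" result the comment warns about. With samples this small, pooling several years of data before drawing conclusions is the point of the sample-size advice above.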
Concerns about the use of the data - yes, I think this could be a problem. But if you are convinced that you are doing a good job, "A" students (and others) should perform at a predetermined level in your course and that should be measurable. If you are not achieving that, you have to ask yourself why - I would suggest that while some things cannot be controlled, for most students you should be able to measure whether or not you are getting through. This also allows you to evaluate changes in your teaching (did it work?).
Finally, our department did turn to a standardized test to look at outcomes for our undergrads, and we did this for two reasons. First, we have a fairly uniform program for certain majors, and that allows us to have a pretty good idea as to what we think they should know when they graduate. Second, we expect that our students will have certain career goals, many of which require good performance on standardized exams in our subject, so we need to know that they can spit back what they have learned in a way that will let them do the next thing they want to do. We don't have enough data to analyze this statistically yet, but the preliminary results are encouraging: people who did better on the exam had better grades, and people who did poorly had poor grades. Trivial perhaps, but it shows that there is a difference between our "A" students and our "C" students. We also got to put some of our own questions on the exam to assess retention of information from courses early in the students' academic careers. If you have specific goals for your course, you can assess them in your final exam, but it's when you look at the performance of your students over time that you really see how they are retaining and using information. I think good outcomes have to be assessed more broadly than just in the context of one course.
I like that we have some data now. I think the hard part is yet to come – we need to show that we used the data to identify weaknesses in our teaching and make corrections or improve. Wish us luck with that one – most people are convinced that things are just fine the way they are, and that if the students aren't learning it's their own darn fault. I wonder if we can do better than that.
I found the bit about "ownership" to be interesting. To argue that faculty should be discouraged from taking ownership is actually counter to the movements we are seeing in the business world. Whatever the quality movement of the day (TQM, JIT, TOC, 6 Sigma, etc) one of the generally recognized tenets is that employees should be empowered to act in their position, and given the authority as well as the responsibility for their work/task/effort.
Generally we call this ownership of their work, and we find that having ownership leads to pride in workmanship, and along with that an increase in quality.
In the academy this seems to me to be even more appropriate. As was pointed out to me in a recent conversation, most departments don't have the luxury of having more than one or two people that are experts in their field of study. These people almost by definition have ownership over their course, because no one else has enough expertise to critique them.
That said, I do feel the greatest empathy for the poor faculty member whose Dean happens to share the same discipline and (horrors) research interests. That faculty member is the one most likely to receive the greatest degree of meddling.
At the community college, almost all of the classes are intro classes, and often this will be the only chemistry class my students will ever take. Each year I may have only about 5 or 6 total who will ever go on to take a second semester of chemistry followed by sophomore organic chemistry. Out of that group I'll have one student every two or three years who goes on to take a junior-level chemistry class. So a majority of my students never have another class that directly requires an understanding of the material learned in mine. Thus, measuring my students' progress once they transfer may not really tell me anything other than whether or not they continued to be good students. Of course, if I have "A" students who are flunking out of Calculus, then maybe I should be concerned that I'm going too easy on them, but I would be hard-pressed to find concrete ways my Gen Chem class needs to change to better prepare students for calculus.
Of course, we don't have the access to information that allows us to track specific students once they transfer. We get some data, but it is extremely general. There is very little breakdown based on which classes they took here and their performance in specific classes at the university.
Of course, since the number of chem and chem engineering majors has been small, I've heard back from most over the years. My impression is that students' determination and willingness to work hard are a better predictor of their success than their actual grade in Gen Chem I. My "B" students who worked hard for that grade have gone on to be a lot more successful than the many "A" students who skated through General Chem I on natural intelligence and a good math and science background. As a result, I've become a bit skeptical about how much grades in an intro class and standardized testing really predict future outcomes.
As for the standardized testing, I realize that collecting data over several years is needed to get meaningful data. I'm not confident that those above me will be as patient before passing judgment. A single bad semester on the test might be enough to bring down pressure from administration to improve or else.
This isn't to say that I don't care whether I am preparing my students well. I take it very seriously and work hard to design tests and other assignments to gauge how well my students are learning. I pay attention to what is being done at other schools. I know very well the topics covered in upper-level chemistry classes and the background needed, and I try to provide that for the very few who will go on. I'm also realistic in that a large portion of my students will forget almost everything once they take the final, and for many of them the lack of retention won't make any difference in their success down the road.
So in the end I have mixed feelings about this whole "outcome assessment" trend. On one hand, data is good, and I'm sure that some of it will be useful. On the other hand, I feel that too much will end up being staked to results that have very limited meaning.
We also got to put some of our own questions on the exam to assess retention of information from courses early in the student's academic careers. If you have specific goals for your course, you can assess them in your final exam but it's when you look at the performance of your students over time that you really see how they are retaining and using information.
Retention of information (it used to be called learning) beyond the final is my concern as well. They need to apply specific parts of it in engineering if they are going to succeed.
That is where I do have some statistically valid data, just not the kind or amount I would like. One group of my alumni in mechanical engineering had a 100% pass rate on a physics/math competency exam that only 20% of "native" students passed on their first try. I define that as a good outcome, as do my math colleagues.
What seems to work is greater depth at the expense of some breadth, a move that is supported by research in physics education (and some new textbooks). I think it also helps that our faculty send a common message across the pre-eng curriculum, starting with chemistry and trig.