Wednesday, September 29, 2010
It Boils Down to...
Several years ago, before I got here, the faculty senate adopted a set of student learning outcomes attributed to the ‘general education’ part of degree programs. These are the outcomes that graduates of the various programs, regardless of specialization, should be able to demonstrate. For example, whether your program is nursing, criminal justice, or music, you should be able to write coherently and clearly. The idea was that by specifying the goals of the gen ed part of the curriculum, the college would have something to measure against to see where improvement was needed.
Yet getting those outcomes from ‘adopted’ to ‘used’ has proved a long, hard slog.
I had thought it was a case of the usual fear of standardization and quantification, along with some level of paranoia about results being used to evaluate individual faculty. I was also prepared for the workload argument (“it’s just one more thing to do”). Instead, I heard that when the outcomes were adopted, it was under duress, and the outcomes had been presented with insinuations that if you (meaning faculty) weren’t at least achieving these, then you weren’t doing your job. Worse, some of them took away the message that the content of their individual courses was irrelevant; these were what education was supposed to “boil down to.”
Ouch. Suddenly the foot-dragging made a lot more sense. In that setting, I’d foot-drag, too.
When I went into my scholarly field, it wasn’t a random choice, or based on a sense that it was uniquely suited for teaching critical thinking. It was because I thought highly of the field; I liked many of its questions, and much of its subject matter. (Methods, admittedly, were another issue.) When I taught my classes to skeptical students, I took conscious steps to address skill development, but I never stopped trying to convey that the subject matter was fascinating in its own right. I never reduced it to a live-action workbook for basic skills, and would have been mortified at the suggestion.
The delicate balance is in respecting the ambitions of the various disciplines, while still maintaining -- correctly, in my view -- that you can’t just assume that the whole of a degree is equal to the sum of its parts. Even if each course works on its own terms, if the mix of courses is wrong, the students will finish with meaningful gaps. Catching those gaps requires looking at the program as a whole, which is where assessment is supposed to come in. But there’s some local history to overcome first.
If there’s some boiling down to do, I think here it boils down to trust. If the faculty trust that they’ll be part of the solution to the gaps, I’m guessing they’ll be more receptive to finding the gaps. But as long as the trust gap lingers, the foot-dragging will continue.
But I see a great opportunity here, the opportunity called “someone else did it.” Since the current faculty didn’t write the outcomes, there’s no reason not to revisit them periodically and sharpen them, with the goal of making them describe what faculty actually try to do, in a form that can be measured as part of the normal process of teaching and grading.