If you want to make a roomful of faculty groan, all you have to do is mention ‘outcomes assessment.’ (Make sure nobody is armed with large fruit, or at least put on a helmet first.)
‘Outcomes assessment’ is a mandate set by the regional accreditation agencies. It’s a catchall term for a systematic process of continual improvement of student ‘learning outcomes.’ Academic units (departments or programs) are supposed to establish mechanisms for determining the skills they want students to acquire in the program, for measuring the acquisition of those skills, and for using any shortfalls as spurs to curricular change to maximize future student performance. It’s a sort of decentralized TQM, deployed as a way to dodge really ugly standardized tests imposed from the outside.
Having been through this at two colleges now, I’m fairly sure it’s possible to map faculty reactions onto the Kübler-Ross stages of grief.
1. Denial. “We already have assessments. We give grades!” Like most common-sense brushoffs, this seemingly persuasive objection misses the point completely. Grades are based on performance in a particular class, and they’re usually assigned by the teacher of the class. If a teacher knows that his performance will be judged by his students’ grades, all he has to do is inflate the grades. Not that anybody would ever do that.
Assessment is about curricula, not individual courses or individual students. This is why it has to be separate from individual courses, and why it rubs faculty so thoroughly the wrong way. (It’s also why institutions that accept a large number of transfer credits will have chronic problems with assessment – it’s never entirely clear just whose program they’re assessing.)
2. Anger. “This is just a way to ‘get’ professors.” “It’s not my fault the students are dumb as bricks.” “*(($&*#^ administration!” At a really basic level, faculty will perceive the call for outcomes assessment as insulting. They are not entirely wrong, but not in the way they think. The point of assessment is to measure program outputs; faculty are used to focusing on program inputs. Since any experienced teacher can tell you that what professors say and what students hear often bear only a vague relationship to each other, faculty correctly perceive great risk in finding out what students actually take from a program. That said, if students don’t take anything from a program, I’d be hard-pressed to justify its continued existence.
3. Bargaining. In academia, this takes the form of minimizing. “Can we just look at graduate placement stats?” “Can’t we just do a few focus groups?” “Will a scantron test be enough?” Minimizing is a very, very common strategy for saying ‘yes’ while meaning ‘no.’ Say ‘yes,’ then do such a pitiful job that the project collapses of its own lack of weight. A savvy dean will keep an eye out for this.
4. Depression. This one is self-explanatory. In academia, the preferred term is ‘faculty morale,’ which is followed by ‘is low’ with the same regularity with which Americans precede the word ‘class’ with the word ‘middle.’
5. Acceptance. Good luck with that.
Assessment is a quixotic enterprise in many ways. Mobile students contaminate the pool, making it difficult to know exactly which inputs are being measured. Pre-tests rarely exist (and students rarely take them seriously anyway), so it’s hard to separate what students picked up from a program from what they brought to it. The measures are inherently reductionist, which is part of what grates on faculty -- if I’m teaching 19th century lit, I don’t just want students to develop critical thinking skills (though that’s certainly part of it); I also want them to learn something about 19th century lit. Since assessments are supposed to be content-independent, the implicit message to faculty is that the actual substance of their expertise is irrelevant. No wonder they get crabby.
To add injury to insult, most faculty hate grading, and assessment results need to be ‘graded.’ Even worse, assessments usually happen at the end of the semester, when patience is thinnest and time is shortest.
The schools at which I’ve been through this have avoided the temptation to outsource their assessment by purchasing a standardized test. I’m mostly sympathetic, though not to the degree I once was. The virtues of an external test are twofold: the workload issue goes away, and you don’t have to worry about conflict of interest. The relative success of the professions in using external tests (your law school doesn’t grade your bar exam) suggests that there may be something to this.
Still, at its best, assessment is supposed to be an opportunity for faculty to discuss the nuts and bolts of their own programs, with an eye to the students. It should create space for the kind of productive, constructive conversations that are all too rare. It’s just easy to forget that at the end of the semester, with the herniating pile of assignments to grade growing larger with each new late one shoved under the door.