Friday, October 21, 2005

 

The Kübler-Ross Stages of Outcomes Assessment

If you want to make a roomful of faculty groan, all you have to do is mention ‘outcomes assessment.’ (Make sure nobody is armed with large fruit, or at least put on a helmet first.)

‘Outcomes assessment’ is a mandate set by the regional accreditation agencies. It’s a catchall term for a systematic process for continual improvement of student ‘learning outcomes’ over time. Academic units (departments or programs) are supposed to establish mechanisms for determining the skills they want students to acquire in the program; to measure the acquisition of those skills; and to use any shortfalls as spurs to curricular change to maximize future student performance. It’s a sort of decentralized TQM, deployed as a way to dodge really ugly standardized tests imposed from the outside.

Having been through this at two colleges now, I’m fairly sure it’s possible to map faculty reactions on the Kübler-Ross scale.

1. Denial. “We already have assessments. We give grades!” Like most common-sense brushoffs, this seemingly persuasive objection misses the point completely. Grades are based on performance in a particular class, and they’re usually assigned by the teacher of the class. If a teacher knows that his performance will be judged based on his students’ grades, all he has to do is inflate the grades. Not that anybody would ever do that.

Assessment is about curricula, not individual courses or individual students. This is why it has to be separate from individual courses, and why it rubs faculty altogether the wrong way. (It’s also why institutions that accept a large number of transfer credits will have chronic problems with assessment – it’s never entirely clear just whose program they’re assessing.)

2. Anger. “This is just a way to ‘get’ professors.” “It’s not my fault the students are dumb as bricks.” “*(($&*#^ administration!” At a really basic level, faculty will perceive the call for outcomes assessment as insulting. They are not entirely wrong, but not in the way they think. The point of assessment is to measure program outputs; faculty are used to focusing on program inputs. Since any experienced teacher can tell you that what professors say, and what students hear, often bear only a vague relationship to each other, faculty correctly perceive great risk in finding out what students actually take from a program. That said, if students don’t take anything from a program, I’d be hard-pressed to justify its continued existence.

3. Bargaining. In academia, this takes the form of minimizing. “Can we just look at graduate placement stats?” “Can’t we just do a few focus groups?” “Will a scantron test be enough?” Minimizing is a very, very common strategy for saying ‘yes’ while meaning ‘no.’ Say ‘yes,’ then do such a pitiful job that the project collapses of its own lack of weight. A savvy dean will keep an eye out for this.

4. Depression. This one is self-explanatory. In academia, the preferred term is ‘faculty morale,’ which is followed by ‘is low’ with the same regularity with which Americans precede the word ‘class’ with the word ‘middle.’

5. Acceptance. Good luck with that.

Assessment is a quixotic enterprise in many ways. Mobile students contaminate the pool, making it difficult to know exactly which inputs are being measured. Pre-tests rarely exist (and students rarely take them seriously anyway), so it’s hard to separate what they picked up from a program from what they brought to it. The measures are inherently reductionist, which is part of what grates on faculty -- if I’m teaching 19th century lit, I don’t just want students to develop critical thinking skills (though that’s certainly part of it); I also want them to learn something about 19th century lit. Since assessments are supposed to be content-independent, the implicit message to faculty is that the actual substance of their expertise is irrelevant. No wonder they get crabby.

To add injury to insult, most faculty hate grading, and assessment results need to be ‘graded.’ Even worse, assessments are usually done at the end of a semester, when patience is thinnest and time is shortest.

The schools at which I’ve been through this have avoided the temptation to outsource their assessment by purchasing a standardized test. I’m mostly sympathetic, though not to the degree I once was. The virtues of an external test are twofold: the workload issue goes away, and you don’t have to worry about conflict of interest. The relative success of the professions in using external tests (your law school doesn’t grade your bar exam) suggests that there may be something to this.

Still, at its best, assessment is supposed to be an opportunity for faculty to discuss the nuts and bolts of their own programs, with an eye to the students. It should create space for the kind of productive, constructive conversations that are all too rare. It’s just easy to forget that at the end of the semester, with the herniating pile of assignments to grade growing larger with each new late one shoved under the door.

Comments:
We are in the process of making up outcomes for every class in our catalog. Our faculty actually is kind of enjoying it.
 
As one of the folks in charge of assessment in our department, I did indeed let out a hearty groan upon seeing the title of your post. I'm actually in favor of this in principle, for the very reasons you mention - we do need to be attentive to what's working and what's not, and it helps us articulate what we imagine the department to be doing as a whole.

My biggest quarrel with it, however, is that the outcomes we really want are not things that can be quantified at the end of a semester. My own experience as a student with college history courses is that there were several things that only really sank in months or years later, and probably there were several more that sharpened my thinking in ways I'm not always conscious of. We measure our goals in terms of encouraging "appreciation" of the forces that shape human events and "critical evaluation" of sources, and in both cases we hope that these continue to develop long after the end of the semester. The best faculty in our department talk about outcomes in terms of what kinds of human beings we help to shape, and I haven't yet seen a standardized test that evaluates that very well.

So we resign ourselves to inventing half-assed evaluations that measure whether students coming out of our courses know how to write essays and papers, and they produce vague enough results to keep everyone happy without threatening any kind of change, and they take enough time and effort to count as big service points, but the whole thing is more a waste of time than it is productive. (That said, if anyone has good solid suggestions about how to do program assessments, we'd be happy to tackle something that looked more effective!)
 
Every class in your catalog? Yikes! I'd just go with programs, not individual classes.

p/h -- you said it better in three paragraphs than I did in the entire entry. The gap between what assessment could be, in the best of all possible worlds, and what it actually is is demoralizing. If there's a more elegant way to assess, I'm still looking for it...
 
The big problem with these assessments is that doing them right requires a huge investment of time and money.

The "pre-class" assessment is less of a problem then you make it, as all previous classes (at least in the department) are assessments of the student in his or her current class. Even if teachers give different grades, in the long run everything averages out and you can account for individual grading differences if you have a big enough database of previous grades.

Similarly, you can account for freshmen using SATs, ACTs, HS grades, and entrance testing.
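To make that concrete, here's a rough sketch (in Python, with made-up numbers) of what "accounting for individual grading differences" might look like: re-express each grade as a z-score against that instructor's own grading norm, then average a student's adjusted prior grades into a baseline. The records, the instructor names, and the two-step normalization are purely illustrative -- an assumption about one way it could be done, not a description of any actual system.

```python
# Illustrative sketch: instructor-adjusted "prior performance" baselines.
# All names and numbers are hypothetical, not real data.
from statistics import mean, stdev

# grade records: (student, instructor, grade on a 4.0 scale)
records = [
    ("alice", "prof_a", 3.7), ("bob", "prof_a", 3.9), ("carol", "prof_a", 4.0),
    ("alice", "prof_b", 2.9), ("bob", "prof_b", 3.1), ("carol", "prof_b", 3.5),
]

# 1. Estimate each instructor's grading norm (mean and spread of their grades).
by_instructor = {}
for _, instructor, grade in records:
    by_instructor.setdefault(instructor, []).append(grade)
norms = {i: (mean(gs), stdev(gs)) for i, gs in by_instructor.items()}

# 2. Re-express every grade as a z-score against that instructor's norm,
#    so an "easy grader" and a "hard grader" become comparable.
def adjusted(instructor, grade):
    mu, sigma = norms[instructor]
    return (grade - mu) / sigma if sigma else 0.0

# 3. A student's baseline is the average of their adjusted prior grades.
baselines = {}
for student, instructor, grade in records:
    baselines.setdefault(student, []).append(adjusted(instructor, grade))
for student, scores in sorted(baselines.items()):
    print(student, round(mean(scores), 2))
```

A real version would want grade distributions per course rather than per instructor, and far more history; incoming freshmen would get anchored to SAT/ACT percentiles instead of prior grades. But the basic idea of normalizing before comparing is the same.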

Still, it takes a huge effort to do all this, and many are threatened by it.
 
Er ... I like assessment ...

But, I think that it's really important to know what is being assessed. What is the level -- if it's department or program, then the outcomes should be attuned to departmental goals and needs. Assessment should be to those outcomes -- so departmental outcomes could be hiring someone in a particular field, increasing overall student enrollment in a program's transfer courses, student retention in said courses, etc. Maybe also a review of courses to see if they address campus-wide graduation outcomes (ours include Info literacy, critical thinking, quantitative reasoning). Department/program outcomes link student outcomes and general institutional outcomes, but are often more institutional.

If done in that sense, a program assessment can be great -- you see where enrollments are, how the students do after they transfer compared to native populations (OK, that's a CC thing, but P/H could reverse it), and generally see where there are places you can improve things. One of those things is justifying new TT positions! Sorry -- I did this a couple of years ago and it taught me so much about how things connected on campus and really gave me a good big picture grip. Assessing departmental outcomes is also very different because most of the assessments are given via statistics and course evaluations -- and professional development reports.

Student outcomes are very different. Conflating them with departmental outcomes is the kiss of death -- people get confused because the picture seems so unrealistic and inoperable. Student outcomes and assessment are generally at the course level and need to be measurable to make sense. What should a student be able to do at the end of the course? 'Appreciate history' isn't an outcome, really. 'Demonstrate an understanding of chronology and major events' is. 'Construct an argument based on primary source evidence' is.

A related outcome at the departmental level might be "faculty create assignments that enable students to work to the campus Critical Thinking and Written Communication Outcomes" -- because that's something you can look at and measure.

At least, that's how I understand it. Can you tell I spent three hours in assessment workshops yesterday?
 
The freshman composition program in which I taught for five years did not have any kind of comprehensive incoming assessment protocol that could tell instructors where a given student's strengths and weaknesses lay. Unsurprisingly, this program also did not have an outcome assessment tool, either. While I feel that such assessment protocols could be very useful indeed (if properly implemented), I would have much preferred that my department implement a comprehensive description of what the students were expected to know (and be able to do) as prerequisites for each course in the sequence. So I am posing this question specifically to anyone who teaches freshman writing and composition (though I welcome responses from anyone, particularly those in the humanities and social sciences): does YOUR program provide specific details on what every incoming student is expected to know? (And if so, have you found the description helpful? Why or why not?)

Don't get me wrong--I think that the program I was in was a pretty good program that was constantly improving. And I'm not sure that a description of required basic skills/knowledge would have helped the students all that much because the quality of the students was quite frightening in its variability, and even worse was most students' unwillingness to expend even an iota of effort to learn knowledge and abilities that I considered absolutely essential prerequisites for the course. Whenever I told students that they needed to learn something now that they were already supposed to know, few were impelled to fill the gaps in their knowledge. So perhaps this type of plan would not have helped them.

But I will say this: I would have found such a prereq description enormously helpful for my confidence level if I had been able to tell students that THE PROGRAM, and not just I, expected them to know certain things and that ignorance started a student off on the wrong foot from day one. In addition, a description of prerequisite knowledge and skills for a given course would act as a description of outgoing knowledge and skills for the previous course in the sequence. And it seems to me that a set of descriptions of this type would be the first step in the process of implementing outcomes assessment, if the school or department so desires.

In math and the sciences, it's pretty easy to state what a student should know before starting a course. But how many programs/courses in the social sciences and humanities are that specific? I only have my own limited experiences to draw upon, so I'm very curious to know what other school are doing.
 
I meant, of course, that I would like to know what other SCHOOLS (plural) are doing. Duh.
 
Our regional accreditation folks need every class in the catalog to have outcomes by their next visit, which is in 2008.
 
I will freely admit here that I *HATE* assessment. I understand that it's necessary, but the way Dream School does it, it just feeds into that "my taxes pay your salary" attitude that many of our CC students come in with. Maybe if it was done in a less touchy-feely Kum-Bayah (sp?) way, it wouldn't be so offensive, but for right now, it just blows. And we're supposed to read the reports, too? Pshaw!
 
We also had to work out outcomes for every course and tie them to the program outcomes (which makes sense) and then to the university level outcomes (which were rather generic).

Assessing outcomes is more difficult, and the continuing, unrelated changes in our programs complicate that.
 