Sunday, November 09, 2014

 

Assessment Done Well and Badly


If you haven’t yet seen Jeffrey Alan Johnson’s essay on faculty/administration conflicts over assessment, check it out.  It’s worth reading, not least because it goes well beyond the usual first-level conflicts over assessment.  (The comments give a pretty good indication of what those usual conflicts are.)  

Johnson’s argument is subtle enough that most commenters seemed to miss it.  In a nutshell, he argues that subjecting existing instruction to the assessment cycle will, by design, change the instruction itself.  Much of the faculty resistance to assessment comes from a sense of threatened autonomy.  Johnson addresses political science specifically, noting that it’s particularly difficult to come up with content-neutral measures in a field without much internal consensus, and with factions that barely speak to each other.  

He’s right, though it may be easier to grasp the point when applied to, say, history.  There’s no single “Intro to History” that most would agree on; each class is the history of something.  The ‘something’ could be a country, a region, a technology, an idea, an art form, or any number of other things, but it has to be something specific.  Judging a historian of China on her knowledge of colonial America would be easy enough, but wouldn’t tell you much of value.  If a history department finds itself judged on “scores” based on a test of the history of colonial America, then it can either resign itself to lousy scores or teach to the test.

The faculty whose subfields or specialties would be sacrificed can be expected, rightly, to object.  The issue isn’t necessarily that they resist any scrutiny or any change -- though, to be fair, some do -- but that the scrutiny is off-point.  

As a political theorist turned administrator, I see Johnson’s argument from both sides.  The need for some sort of thoughtful assessment process goes well beyond accreditation mandates, as important as those are.  The “distribution requirement” model of degrees is built on the assumption that the whole of a program will equal the sum of its courses.  We all know it doesn’t always work out that way, though.  Program-level assessment looks at student outcomes after they’ve taken the individual courses in sequence, and highlights any gaps.  That’s why the chestnut “we already do assessment -- we give grades!” misses the point.  Grades apply to individual students in individual courses.  If the sequence of courses is missing something, that won’t show up in grades.  You might get an “A” in Comparative Politics without knowing anything about political theory.

I’m proud of the model we’ve adopted locally.  We have a Gen Ed Assessment Committee (GEAC, pronounced “geek”) that looks at student work samples, submitted by faculty, and scores them against the five general education learning outcomes the college adopted through the Senate seven years ago.  The members of the GEAC are all faculty, and their workloads are adjusted so they have time to do the job right.  (Whether the adjustment is enough is always a question, but that’s another issue.)  They draw work samples from programs across the curriculum, and make recommendations to the college as a whole.  Their recommendations have been well-received, in part because other faculty respect them as colleagues, and in part because the process makes sense.

Programmatic assessment can be more of a challenge when you don’t have obvious capstone courses.  Transfer-oriented programs often don’t, at the two-year level.  But there, again, it makes sense that a program would want to know where it’s doing right by its students and where it’s falling short.  Individual faculty may feel some tension between their own goals and departmental goals, but that’s not the fault of assessment.  In fact, if memory serves, those tensions pre-date the assessment movement pretty substantially.

From an administrative perspective, Johnson’s article offers worthwhile cautions.  If the goal is actual improvement, it’s crucial that faculty are on board.  (Not unanimously, of course, but broadly.)  Doing assessment stupidly -- say, as an add-on to grading -- will defeat that purpose.  Faculty need to be able to raise the difficult questions around how a given assessment mechanism fits with what’s actually being taught.  The idea isn’t to allow a sort of plebiscitary veto -- that ship has sailed -- but to make sure it’s done in an intelligent and productive way.  If it’s presented as the latest variation on Soviet-style production numbers, then it will be about that reliable.  But if it’s designed openly -- that is to say, if administrators are willing to cede a considerable amount of control over the specifics -- then it can actually accomplish something worth accomplishing.

Comments:
Faculty buy-in is essential. It is not enough to give faculty control over the process or the details of assessment if they see the whole exercise as a pointless bureaucratic imposition.

I have to do "assessment for program learning outcomes" for an interdisciplinary program, while the entire faculty of the multiple departments involved sees the "program learning outcomes" as bureaucratic makework (an attitude I'm afraid I share, since the outcomes aren't written at a detailed enough level to actually provide useful feedback).
 
Any administrator who starts out with that "telling them what to do" approach is begging for failure. IMHO, the key is telling them what you are *not* going to do -- namely, require a single national standardized test that must be passed to satisfy the College History general education requirement. Instead, we develop our own set of outcomes that match up with program-level outcomes, and we develop and test assessments for those outcomes.

Done right, this teaches you a heck of a lot more than you'd learn from looking at grade distributions or hoping you happen to remember which problems were missed by a lot of students. And since part of the process is to reconsider the choice of outcomes and/or look at how they fit into the gen ed distribution requirements, it has value over the long haul as well.

The first year or two is a bitch, however, adding many hours of uncompensated time to my workload.

I like your history and poli sci examples. The faculty should be talking to each other! If they have actually thought about the subjects they teach, they should be able to identify the higher-level, broader categories that each class covers in its own way, and that can then be assessed in its own equivalent, but not identical, way.
 
 