I'm interviewing for an interesting non-faculty position soon, and
before I go in to the interview, I'd like to get your/your readers'
takes on it. The position is "Assessment Coordinator" for a relatively
small (specialized) university. The job description looks something like
this: 1. developing and implementing an assessment program; 2. conducting
quantitative and qualitative campus-wide assessments; 3. assisting
program chairs with developing departmental assessments; and 4.
facilitating the administration and analysis of internal and external
surveys.
Now, I have a Master's in Sociology, so I know how to develop surveys
and assessments in a variety of settings, and analyze the results. I
also have experience in medical research and working for hospitals, so
I can handle the medical aspect. However, I'd like to get a
specifically higher-ed perspective. I know that when I was in grad school,
faculty HATED the student assessments and students didn't take them
seriously, which undermined their validity and effectiveness. I'm sure
similar displeasure exists with some institution-wide assessment
methods and other assessments within the departments, though I did not
really encounter these personally as a student. I personally believe
that a major reason for this dissatisfaction is the failure to involve
faculty and other stakeholders at the institution in the *design* of
assessments, along with plain poor understanding of survey and research
design and low institutional investment in really understanding the
data and acting on the results.
What would you, as a dean and a former faculty member, like to see in
the assessment methodologies your CC uses? Are there any methods
you've found that really work, or are appreciated by faculty/students?
Any ideas I could pipe up with in my interview, as possible options for
this institution? I am not afraid to be perceived as the bad guy, since
finding out all the problems and bringing them into the light is
rarely popular, but I optimistically hope that if an assessment were
valid, really captured the good and the bad of a university system,
and gave people opportunities to air their grievances, stakeholders
would get behind it. Since I am just getting ready for the interview,
I don't yet know what particular problems or issues this institution
has, but some general input would be welcome.
This won't be easy.
There's a difference between the kinds of assessments students do of classes at the end of the semester – usually called 'evaluations' or something close to that – and 'outcomes assessment.' (Your reference to 'student assessments' is ambiguous.) Students' course evaluations are a sort of Siskel/Ebert thumbs-up or thumbs-down on a given professor and/or class. Professors are supposed to loathe and ignore student evaluations, although many secretly cackle when they themselves do well. (Honestly, I did the same thing.) The strange paradox of student course evaluations is that for such crude and badly administered instruments, they tend broadly to get it right. (Ratemyprofessors does that, too. I have no idea how it happens, but it's generally pretty close.) Yes, they tend to over-reward attractiveness and sense of humor and under-reward clarity, but the margins of error (at least in my observation) are usually pretty small.
Outcomes assessment is a very different animal. It's about measuring student achievement, rather than student opinion, and the point is to focus on curriculum, rather than personnel. It's largely driven by mandates from regional accrediting agencies, and it's usually unpopular with faculty.
Broadly, the idea of outcomes assessment is to see what students are capable of doing after completing a given course or program of study. It's different from grading, which is the usual first line of defense. (“We already assess. We give grades!”) For example, in my days at Proprietary U, we noticed that even students who had earned good grades throughout were often clumsy public speakers, incapable of giving effective presentations. As the faculty discussed that, we realized that although we had emphasized writing a great deal, we had done little or nothing anywhere in the curriculum to teach effective speaking. That's how a student could get good grades and still fall short on a key outcome: simply put, there was a hole in the curriculum. Once we figured that out, we adopted a number of measures to improve students' ability to give presentations.
Outcomes assessment carries with it any number of negative associations for faculty. It's extra work, it's often ignored at budget time, and to many (sometimes correctly, sometimes not) it smacks of standardization. Bring it up among faculty, and mere nanoseconds will pass before someone mentions No Child Left Behind, teaching to the test, and Wal-Mart. It can also be jargon-laden and opaque, if not downright abstruse, which doesn't help. For a general outline of the Kübler-Ross stages of outcomes assessment, see here.
If the job is about outcomes assessment, rather than student course evaluations, I'd strongly suggest going in with discussions of two major issues: addressing the roots of faculty resistance, and using the findings to close the loop and actually improve program delivery. Both are really about communication; if you can frame assessment in ways that don't trigger unhelpful reactions, you might get some useful data. And if you can then convince people to use that data to improve their programs, you've really got something.
Good luck!
Wise and worldly readers – what do you think?
Have a question? Ask the Administrator at deandad (at) gmail (dot) com.