Wednesday, July 16, 2008

 

Ask the Administrator: Selling Assessment

A new correspondent writes:

I'm interviewing for an interesting non-faculty position soon, and
before I go in to the interview, I'd like to get your/your readers'
takes on it. The position is "Assessment Coordinator" for a relatively
small (specialized) university. The job description looks something like
this:

1. development and implementation of an assessment program;
2. conducting quantitative and qualitative assessments for campus-wide assessment;
3. assisting program chairs with developing departmental assessments; and
4. facilitating the administration and analysis of internal and external surveys.

Now, I have a Master's in Sociology, so I know how to develop surveys
and assessments in a variety of settings, and analyze the results. I
also have experience in medical research and working for hospitals, so
I can handle the medical aspect. However, I'd like to get a
specifically higher-ed perspective. I know when I was in grad school,
faculty HATED the student assessments and students didn't take them
seriously, making the validity and effectiveness very low. I'm sure
similar displeasure exists with some institution-wide assessment
methods and other assessments within the departments, though I did not
really encounter these personally as a student. I personally believe
that failure to involve faculty and other stakeholders at the
institution in the *design* of assessments is a major reason for this
dissatisfaction, along with plain poor understanding of survey and
research design and low institutional investment in really
understanding the data and acting on the results.

What would you, as a dean and a former faculty member, like to see in
the assessment methodologies your CC uses? Are there any methods
you've found that really work, or are appreciated by faculty/students?
Ideas I can pipe up with in my interview, as possible options for this
institution?  I am not afraid to be perceived as the bad guy, since
finding out all the problems and bringing them into the light is
rarely popular, but I optimistically hope that if an assessment were
valid, and really captured the good and bad of a university system,
and gave opportunities for people to get their grievances out there,
stakeholders would get behind it. Since I am just getting ready for
the interview, I don't really know what known problems or issues this
institution has yet, but some general input would be welcome.



This won't be easy.

There's a difference between the kinds of assessments students do of classes at the end of the semester – usually called 'evaluations' or something close to it – and 'outcomes assessment.' (Your reference to 'student assessments' is ambiguous.) Students' course evaluations are the sort of Siskel/Ebert thumbs up or down on a given professor and/or class. Professors are supposed to loathe and ignore the student evaluations, although many secretly cackle when they themselves do well. (Honestly, I did the same thing.) The strange paradox of student course evaluations is that for such crude and badly administered instruments, they tend broadly to get it right. (Ratemyprofessors does that, too. I have no idea how it happens, but it's generally pretty close.) Yes, they tend to over-reward attractiveness and sense of humor, and under-reward clarity, but the margins of error (at least in my observation) are usually pretty small.

Outcomes assessment is a very different animal. It's about measuring student achievement, rather than student opinion, and the point is to focus on curriculum, rather than personnel. It's largely driven by mandates from regional accrediting agencies, and it's usually unpopular with faculty.

Broadly, the idea of outcomes assessment is to see what students are capable of doing after completing a given course or program of study. It's different from grading, which is the usual first line of attack. (“We already assess. We give grades!”) For example, in my days at Proprietary U, we noticed that even students who had attained good grades from the start were often clumsy public speakers, incapable of giving effective presentations. As the faculty discussed that, we found that although we had emphasized writing a great deal, we had done little or nothing to teach effective speaking throughout the entire curriculum. That's how a student could get good grades and still fall short on a key outcome – simply put, there was a hole in the curriculum. Once we figured that out, we adopted a number of measures to improve students' ability to do presentations.

Outcomes assessment carries with it any number of negative associations for faculty. It's extra work, it's often ignored at budget time, and to many (sometimes correctly, sometimes not) it smacks of standardization. Bring it up among faculty, and mere nanoseconds will pass before someone mentions No Child Left Behind, teaching to the test, and Wal-Mart. It can also be jargon-laden and opaque, if not obtuse, which doesn't help. For a general outline of the Kubler-Ross stages of outcomes assessment, see here.

If the job is about outcomes assessment, rather than student course evaluations, I'd strongly suggest going in with discussions of two major issues: addressing the roots of faculty resistance, and using the findings to close the loop and actually improve program delivery. They're really both about communication; if you can frame assessment in ways that don't trigger unhelpful reactions, you might actually get some useful data. Then if you can convince people to actually use that data to improve their programs, you've really got something.

Good luck!

Wise and worldly readers – what do you think?

Have a question? Ask the Administrator at deandad (at) gmail (dot) com.

Comments:
DD is correct about selling to faculty and closing the loop. The important thing to understand is that those two go together -- so if you have ideas or experience in doing those things, be sure to bring them up.

Assessment Coordinator sounds, to me, like a job in which you'll be both trying to push rope and herd cats -- if you have those kinds of skills, good luck!
 
There are three key issues:

First, what do you expect to accomplish in your curriculum? If the goal is some institution-wide assessment tool(s), then the question is what the common learning goals are. If the institution does not have such learning goals, you're up a creek from day one. Establishing those goals -- concretely, measurably -- is perhaps the hardest part of the process. This cannot be done without faculty participation and support.

Second, you have to develop methods of measuring how well students have achieved those learning objectives. Here, there are two issues: (1) How well have students achieved in these areas? (2) How much of that is a result of the institution (value-added) and how much is a result of student preparation when they arrived? Both are relevant, both are hard. (A rough sketch of one way to model value-added appears at the end of this comment.)

Third, you need faculty and administrative buy-in. This will need to be created both top-down and bottom-up. I know of institutions that profess a commitment to assessment, but provide no resources (no real administrative buy-in) and others at which the faculty refuse to play.

If you can make this work, you'll have immense opportunities to move around--and up--in higher ed.
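
On the value-added question in the second point: one common first pass is to regress an exit measure on an entry measure and read the residual as a rough institutional contribution. A minimal sketch in Python, assuming a flat extract of student records -- the file name and the entry_score, exit_score, and program columns are hypothetical:

    # Rough value-added sketch: control for incoming preparation, then
    # read what's left over as (noisy) institutional contribution.
    # File and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("outcomes.csv")

    X = sm.add_constant(df["entry_score"])   # entering preparation
    fit = sm.OLS(df["exit_score"], X).fit()  # exit measure ~ entry measure

    # Positive residuals: students did better than their entering
    # preparation alone predicts. Aggregate by program to compare.
    df["value_added"] = fit.resid
    print(df.groupby("program")["value_added"].mean().sort_values())

This is crude -- selection effects and measurement error leak into the residuals -- but it at least keeps the two questions (raw achievement vs. value-added) separate.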
 
What do I think about the original question? I think this person has no idea what job s/he applied for! The only reason to hire an "Assessment Coordinator" is to create an outcomes assessment (OA) program, not to fine-tune something that already exists.

BTW, thanks for the pointer to your old article about assessment; that was back before I found your blog.

I don't think OA is entirely about curriculum, although I have recently identified one case near to my heart (trigonometry) where that plays a role. It also must not be done at the end of the semester, since that misses the point - which is to detect whether any real learning took place. OA must be done no earlier than the start of the next semester.

(I was involved, as a student, in a case-controlled teaching experiment where we were paid to take a final-like exam around the middle of the next semester to assess retention. I should get a copy of that paper!)

IMO, the failure to retain what was learned is systemic but can be attacked in the classroom. A good example was in Dr. Crazy's comment in a discussion about getting the very idea of a prerequisite across to students, whose results were summarized last month.
 
One other thing: Does the writer have institutional research skills? We get our best information by data mining for correlations in our student grade and post-transfer data. The latter could be re-cast into employer surveys for a specialized university if they have good alumni tracking, but the former requires real talent unless they can afford to hire someone who can hack through the database for them.
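
To make that concrete, here is a minimal sketch of the grade/post-transfer correlation mining described above -- all file and column names (student_grades.csv, post_transfer.csv, student_id, course, course_gpa, transfer_gpa) are invented for illustration:

    # Sketch: correlate course grades with post-transfer performance,
    # course by course. File and column names are hypothetical.
    import pandas as pd

    grades = pd.read_csv("student_grades.csv")    # one row per student-course
    transfers = pd.read_csv("post_transfer.csv")  # one row per transfer student

    merged = grades.merge(transfers, on="student_id")
    corr_by_course = merged.groupby("course").apply(
        lambda g: g["course_gpa"].corr(g["transfer_gpa"])
    )
    # Courses whose grades barely predict later success float to the top.
    print(corr_by_course.sort_values().head(10))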
 
I strongly suggest getting into the ERIC database through your library and researching assessment and higher education thoroughly. Being able to quote chapter and verse of the current research will be invaluable in helping you to figure all of this out for yourself.
 
Speaking specifically to classroom surveys -- I dislike them because they're so generalized. It's why I disliked them as a student, and it's part of why I dislike them as faculty. I understand that using the same assessment across the entire school creates a "standard" you can judge institution-wide, but using the same criteria to judge a chemistry professor and a poli sci professor always struck me as insane, and the questions were so broad as to be useless.

As faculty, we do have a departmental assessment tool in addition to the overbroad institution-wide tool, but it SO strongly reflects our department chair's personality, intellectual goals, and teaching style (confrontational) that I don't feel it's very helpful EITHER, since my teaching style is very different. (I don't feel like the rest of us are being set up to fail -- I think it's just a sort-of blindness to the issue and self-importance -- but I can easily see how that COULD be used to set junior faculty up to fail.)

What I as a professor would like is assistance in developing my OWN assessments more effectively -- the ones I have students do anonymously at the end of the semester.

As for buy-in, as a student AND as faculty I would like quite a bit more information on how those assessments are used at the departmental level. As a student you generally have a pretty clear idea of which faculty pay attention to them and which don't, but no real idea if the department cares or how the department uses the data or whether 3s keep you on tenure track or whether 5s are demanded; whether comments matter; if there are key words looked for in comments ... (and so if you have nothing in particular to dislike about the prof, you circle all 5s because you don't want them penalized). And as faculty you get that sick feeling in the pit of your stomach when you get a particularly nasty and vindictive one, even if you know your department chair/dean KNOWS vindictive when they see it, just because you don't know who else sees it, who else reads it, who takes it seriously, and how those numbers/comments get SPECIFICALLY used. Transparency and openness would help quite a bit.
 
A couple of suggestions from a faculty member who has worn the "Assessment Czar" hat. First, avoid all mention of the following terms - "deliverables," "accountability," "value added," "continuous quality improvement," etc. In other words, avoid anything that sounds like corporate-speak in reference to assessment. Only use language that speaks to faculty in terms of their interest in helping students learn. Phrase it this way - "what do you want to know about student learning?"

Second, the distinction between summative and formative assessment is important to understand, and ultimately manipulate. Summative assessment (outcomes assessment that attempts to quantify student learning after the completion of a class or a program) is the "bad" assessment that smacks of standardization, measurement, and NCLB. Formative assessment (which provides information about student learning that can be used by faculty in the process of teaching) is something that is relatively easy to sell to faculty. See the end of paragraph #1 for how you do that.

Third, although it is tempting, do not use the approach of selling assessment as a fait accompli being imposed on faculty by (the feds)/(the state)/(the board of trustees)/(the accreditors)/(etc.) Too often, I see assessment folks who think that they can get out of the role of the bad guy by arguing that they are just trying to help avoid the more draconian version of assessment that will be imposed unless we come up with something. There's a lot of truth to that statement, but it does nothing to make faculty excited about participating.

Fourth, consider that this is one of the most difficult jobs you could take on. I would want to know a lot about the history of assessment at the school, accreditation reports on assessment, what the structure of the assessment effort is, etc.
 
Well, thank you all for the good advice. I do believe this would be a difficult position, and from my discussions with the school's staff so far, it sounds like the job is intended to be all of the above: outcomes assessment, student evaluations, and some more far afield things, like measuring criteria for accreditation. Systems for all of these *apparently* exist in some form at the school, but how effective and useful they are at this time is in question.

I will definitely hit the literature and see what the 'new hotness' in higher-ed research methodologies is. Clearly, outcomes assessment is a bit more beastly than I had envisioned, and I'll be sure to nail down exactly what they currently do/want to do in that realm in my interview.

Oh, and ccphysicist, I do have some institutional research skills, as well as clinical and K-12 ed research experience. Sounds as though I'd need to use every bit of all of that in this position, if I end up in it.
 
At my old CC, "assessment" meant the tests you took before you got to register for classes. We had Reading Comprehension, English, and Mathematics assessment tests, and those allowed the college to tell us which classes we could take, or whether we had tested out of that portion.

So, for instance, by taking the Mathematics assessment you could place into Math 51 (remedial math), Math 100 (beginning algebra), Math 120 (intermediate algebra), or Math 300 (calculus or trigonometry, I think).
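
Placement like that is essentially a cutoff table. A toy sketch -- the thresholds below are invented, not the college's actual cutoffs:

    # Toy placement rule: map an assessment score to a math course.
    # Cutoff values are invented for illustration only.
    def place_math(score: int) -> str:
        if score < 40:
            return "Math 51 (remedial math)"
        if score < 60:
            return "Math 100 (beginning algebra)"
        if score < 80:
            return "Math 120 (intermediate algebra)"
        return "Math 300 (trigonometry/calculus)"

    print(place_math(72))  # Math 120 (intermediate algebra)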

It was at the Assessment Center that you qualified for DSPS or EOP&S (Extended Opportunity Programs & Services, for disadvantaged or at-risk students) as well.

We had an Assessment Coordinator who looked over your results and determined your placement. I'm sure she did other things as well.
I think, IIRC, her office was the one that did the student surveys and interpreted the data.

Could *this* be what the person is asking about?
 
Original questioner mentioned that the job might include "some more far afield things, like measuring criteria for accreditation".

(Antennae go up)

Are they due for "reaffirmation" in about 2 years or so? (Easy to find out.) If so, that might be your full time job for the next two years, and relevant experience might be what will win you the job.

The new criteria are all about assessing your current state and identifying an on-going improvement plan with documentation along the way. And then three to four years in you have to document progress on a set of intermediate goals, making accreditation into an ongoing (rather than every 7 year) task.
 
The fact that an institution might consider hiring someone with a masters in sociology to run assessment is highly correlated with why faculty consider it to be such a joke.
 
Agreed.

"Rational" (reality-based) "Assessment" raises two questions:

1) Are your customers defined as your students, or the firms that hire them?

2) 30-day placement rate and associated salaries are the only "Assessment" you need. If your graduates aren't valued by the real world, you are cheating them, and cheating society as a whole.

"Assessment" ain't complicated -- unless you have a non-rational reality framework.

Sheesh.

[P.S. That simple compound metric -- placement on time and on salary -- can drive an awful lot of very useful secondary-level stuff the "assessment office" can "ob(as)sess" over . . .]
 
ccphysicist -- that thought had crossed my mind as well. They didn't make a big thing about the accreditation in my conversations with them to date, but I know that is a huge job, and I'm going to find out exactly how much of it would be on this position.

crankystudentnurse -- No, that is not at all related to this job, to my knowledge.

Anonymous -- perhaps you'd be well advised to go learn what a sociologist does and come back later.

Confused Professor -- The system this college wants is not simply to evaluate the students' earning potential. They already know their job placement and earnings numbers are fine; they have that information. It's more the internal aspects of the institution that they want assessed, from what I understand. Assessment is decidedly not simple, hence DD's comment about how difficult these things can be.
 
Another new commenter here, dovetailing two recent posts, this one and the "double-dipping" post above. I'm the "SLO Coordinator" at my college, for which I receive 40% release time (or two classes) -- last semester, I taught one of those two released classes as an overload, which I felt slightly crummy about, so probably won't do it again.

But I very much want to say to the earnest "new correspondent" that he or she might really be underestimating the extent of faculty resistance to learning outcomes. Some people on my campus (or in the union) see them as "academic Taylorism," a business-accountability measure imposed on education; others see them as an unfunded mandate, and thus a bargainable issue; still others as an invasion of academic freedom. Finally, some simply see them as useless -- the "flavor of the month" that will tell us nothing whatsoever about what actually happens in the classroom.

I knew most of those attitudes going in, but what surprised me was the passion and, in some cases, downright nastiness with which that resistance was expressed. I see some modest value in outcomes, even if they weren't mandated by our accrediting board, but I was very close to resigning this position because of the reception my efforts were getting. So I did want to issue that warning . . . also, do check out Classroom Assessment Techniques by Angelo and Cross, a collection of classroom practices for assessing outcomes that keeps the focus squarely on teaching and learning and how to improve them both.
 