Wednesday, December 05, 2007

 

Reading Evaluations

A new correspondent -- and apparently the kind of student we'd all like to have -- writes:

As a student, I have gotten a lot more careful over the years with how I
fill out student evaluations, because I know more about what they mean
for the instructors. I've read a lot of complaints on academic blogs
that students do not carefully fill out the evaluations and that their
criticism is sometimes unfair. I try hard to be both constructive and
fair. I almost always include written comments, unless I've got
absolutely nothing to say.

My question relates to how the evaluations are used by administrators.
The form usually asks for separate written comments addressing such
things as what could be improved about the course, how useful the
readings and assignments were, etc. Thinking that the feedback could be
useful to the instructors -- especially brand-new, inexperienced ones --
I've always answered them honestly, identifying both strengths and
weaknesses.

But I find myself wondering: if the way evaluations are used is as
abusive as some academic bloggers feel it is, maybe I should just opt
out, or bubble in all 5's and leave it at that.

If I give an instructor generally high numerical marks, and note in the
written comments that the instructor's assignments are clearly written,
obviously well thought out, and very helpful; the readings are
informative; and the class exposed me to concepts I hadn't been familiar
with but which were very important -- but that his lectures are somewhat
confusing due to disorganization and a lack of "signposts" -- have I
just hosed someone's (by all indications very promising) career by
pointing out a weakness?

Likewise, if I give extremely low ratings to a department-designed
cookie-cutter course, but not to the instructor herself, and make it
very clear in the written comments that it's the course itself I feel is
valueless, is the instructor going to be penalized for this?

If there's anything I've learned about course evaluations in my time blogging, it's that different administrators (and different colleges) treat them differently. I'll discuss how I treat them, and how I've seen them treated, and how I think they ought to be treated. It's entirely possible that not everybody treats them as thoughtfully as they should.

The evaluations we use separate questions into three categories: numerical about the instructor, numerical about the course, and open-ended.

When considering applications for promotion, we look at the first and third categories. The second category is mostly ignored. If I had my druthers, we'd include data from the second category in outcomes assessment exercises -- in which we look at the curriculum, rather than the instructors -- but we're not there yet. Maybe someday.

The first category lends itself easily to a single summary score, which is reported up the chain. The single summary score is relatively unhelpful for most faculty, but it does help you spot outliers. If one professor is coming in several standard deviations below everybody else -- it happens -- that's a red flag. It's not dispositive by itself, but it suggests taking a closer look.
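
For the curious, here's a minimal sketch of that kind of outlier screen, with made-up scores and an illustrative cutoff -- this is not our actual form or process, just the shape of the idea:

```python
from statistics import mean, stdev

# Hypothetical per-instructor summary scores: the mean of the numerical
# instructor questions, on a 1-5 scale where 5 is best.
summary = {
    "Prof. A": 4.4, "Prof. B": 4.1, "Prof. C": 4.6,
    "Prof. D": 2.2, "Prof. E": 4.3,
}

scores = list(summary.values())
mu, sigma = mean(scores), stdev(scores)

# Flag anyone well below the pack. The cutoff is an illustrative choice,
# not a policy; with a small department, one low score inflates sigma,
# so a leave-one-out comparison would arguably be fairer.
CUTOFF = 1.5
for prof, score in summary.items():
    z = (score - mu) / sigma
    if z < -CUTOFF:
        print(f"{prof}: summary {score:.1f} (z = {z:.1f}) -- worth a closer look")
```

A flag like that is only a pointer to go read the actual forms; as I said, it decides nothing by itself.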

The open-ended answers take the longest to read, but are by far the most useful. Some of them, I'll admit, I just tune out: "professor was mean she wouldn't let me do extra credit." Good for her. "Too much work." If everyone says that, I'd take a look; if a few do, I write it off to standard student griping. Some are just mean -- comments about clothes or hair or accents. And some are just inappropriate -- "she's hot!" Thanks for sharing.

The ones I've seen that have caught my attention are the ones like "he takes two months to grade papers." That's a specific complaint about something relevant that is usually within the professor's control. If that one pops up a lot, I check it out. If it's true, then that's something the professor needs to address. I recall one professor at PU whose students commented, almost uniformly, "he changes the rules every week. We never know what the deadlines are." To me, that's a serious charge. Similarly, anything like "professor is often very late to class" or "professor misses class a lot" is a bright red flag. If those are true, then we have a very real issue.

The comment about signposts in lectures wouldn't really register with me either way; I'd read that as intended for the prof, rather than for me.

All of that said, I've heard of some people with 'bright line' rules about numerical scores, and of adjuncts being non-renewed for trivial complaints. I've never done that, and I've never seen it done at either of the colleges at which I've worked in administration. But I can't say it has never happened anywhere.

I've noticed a strong 'halo effect' in the numerical questions. If the students like the professor, they'll forgive many flaws. If they don't, they'll nitpick every little thing. An evaluation in which the student draws a straight line down the 'poor' column doesn't suggest 'bad professor' so much as 'disgruntled student.' An evaluation with mostly 'fair's and 'poor's, somewhat interspersed, is actually far more damning, since it suggests actual thought.

To my mind, student evaluations should be an element in evaluating teaching, but they can't be the only element. Some classes will consistently 'score' relatively low, no matter who teaches them. (Remedial classes almost always score low, as do required math classes for humanities majors.) And it's certainly true that most of us with any kind of experience have had some classes that just 'clicked' and some that just didn't. My standard for myself as a teacher was that on good days, I should be really good, and on bad days, I should be at least professional. Any professor who claims never to have had a lesson fall flat just isn't very self-aware.

All of that said, I'll fall back on the rule I use when I write up class observations: write what you saw. Ultimately, you can't control how it's read or used, and you don't control your professor's career. If your professor has an idiot dean or department chair or provost, that's beyond your control. If your comments are honest and thoughtful and constructive, you're doing your part.

I'm almost afraid to ask, but...wise and worldly readers, what do you think? What have you seen?

Have a question? Ask the Administrator at deandad (at) gmail (dot) com.


Comments:
Perhaps students wonder whether anyone actually reads the evaluations and, if so, whether any action is ever taken as a result. Perhaps this is why you get responses like "he/she is hot".

In my most recent experience, there were some serious flaws with the program, and specific issues with certain faculty remained constant from semester to semester, so it seemed student evaluations went into one big black hole, never to be seen again.
 
I actually use two evaluations for every class I teach: the standard form, and then my own, with five open-ended questions. Since six months generally pass before I see the university evaluations, I use my own form to overhaul the class right when it concludes (while everything is still fresh in my brain). I've found my own form to be very helpful.

The university forms are linked to P&T--they are carefully scrutinized, particularly when someone is up for tenure. Post-tenure, they're used to flag problems (like not having any for your class, semester after semester).

Generally, I ask students to spend time on both, stressing that the two forms have different purposes. I also make sure to disappear as they start to fill them out, asking a student to deliver the university forms to the departmental chair and to slip mine under my office door.

BTW: My doctoral mentor started me on this system over 15 years ago as a method of "quality control." It's been a really helpful tool.
 
My departments (yes, plural -- I'm an adjunct) make no bones about the fact that they use the evals to decide whether to re-hire the instructor. They're not interested in making fine distinctions or wading through ambiguity. Good = good; bad = bad.

Most times this isn't a problem, but every once in a while I get a dud group of students -- it does happen -- that I can't wake up, and then I do worry. When this happens, I do some preparatory work and try to make it known to the powers that be that I have a bad group of students in that particular section.
 
I know of cases in which a specific allegation on an evaluation was cited when an adjunct asked why the college suddenly couldn't find any courses for him the following semester. A student had claimed on the form that the instructor had come late several times, and, without discussing it with the instructor, the administration decided that he was more trouble than he was worth, whether it was true or not. Perhaps this sort of thing is unavoidable, but adjuncts must take care to develop good relationships with their departments, so that the department might at least do them the courtesy of letting them defend themselves.

As for advice for the thoughtful student evaluator, I am also aware that some colleges do not include in an instructor's statistics any evaluation that fills the bubbles all in one column, whether those are all 1's or all 5's. I have made a habit of reminding my students not to do this, as, even if they totally loved or hated me, their opinion might not be counted unless they show a little variation.
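
That exclusion rule is simple enough to state precisely. A hypothetical sketch -- the rule is as described above, while the function name and the example scale are mine, purely for illustration:

```python
def counts_toward_stats(form: list[int]) -> bool:
    """A form is counted only if its bubbles show some variation;
    a straight line down one column (all 1's, all 5's, etc.) is excluded."""
    return len(set(form)) > 1

print(counts_toward_stats([5, 5, 5, 5, 5]))  # False -- straight-lined, excluded
print(counts_toward_stats([5, 4, 5, 5, 5]))  # True -- a little variation, counted
```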
 
Six months before you see the evals from the past semester? Not very helpful if you're teaching the same course semester after semester, is it?

One of the biggest problems with course evals is that, as DD's thoughts and other comments suggest, there are at least two audiences: P&T committees and the instructor. The interests of those two are not identical.

If evals are to be of benefit to the instructor, prompt return is essential, as are questions that address specific elements of the course. The supplement that 447am uses is a great idea, as it can be tuned to the course itself and can bypass the usual bureaucratic delays in return.

I also think that DD's point about numerical summaries is right on and is much the way I have used them: they are useful in spotting outliers. Reading evals for patterns of comments, both good and bad, also reflects my own practice as an administrator.

Question: do any folks use mid-semester course evals of their own to get some feedback in mid-stream? I've found that to be helpful; when instructors are aware that they are struggling, or when I've heard a pattern of student concerns about a course, it's a suggestion I make to the instructor.
 
As a tenure-track asst. prof. at my institution, I am expected to include sample evaluations in my materials for reappointment, promotion, and tenure. I have been advised not to include an otherwise stellar evaluation if it contains even one "negative" comment (even a constructive one). If the numbers are low -- whatever comments the student leaves -- that may mean I don't get a raise that meets cost-of-living increases for the following year (we have merit pay). The only evaluations that are helpful to a professor, at my institution, are ones with only positive comments and only high numerical scores. Period.

As for the value of evaluations in actually considering how to improve my own teaching, well, my response is mixed. Sometimes there's something of use, but more often than not there isn't. And I know many of my tenured colleagues make a practice of ignoring the university-administered evaluations. This isn't to say that they don't care about student feedback -- many of them do -- but after they get tenure, they abandon the university forms because the forms are so often useless in evaluating how a class actually went.
 
I am only in my third year of teaching, so I am decidedly less experienced than the rest of you, but I think that most of the people at my institution (or at least those I have contact with) agree wholeheartedly with Dean Dad's breakdown. As the instructor, I generally disregard the couple of people who say there is too much reading (history courses require a lot of reading...deal with it), or the two people who really hated the digital assignment I gave last semester. But I would definitely pay attention to things like "her lectures were hard to follow" or "she talks too fast." I pay more attention to these because I know my weaknesses: I do talk fast, which may make my lectures difficult to follow. Anyway, I always get a kick out of evals, since you can easily pick out the vindictive students...last semester one even called me a "bitch." Which, if you know me, is not how I roll.

Anyway, I thought this was a great post!
 
My experience as an adjunct, and that of my far-flung colleagues, has been the same for years. Five-star evals are routinely ignored. Deans and chairs only perk up when students (i.e., consumers) bark and whine. If you want to understand how my class functions, please drop by. I'm always open for business.

Why you insist on staking my future on the opinion of some kid who still has the dry-cleaning receipt for the tux he rented from prom night kicking around in the back seat of Mom's Honda is beyond me. And I say this as one with a file cabinet full of gold stars from these students.

As far as returning student work goes, two months is inexcusable. Getting through a stack of 160+ essays for 8+ classes of students who demand instant gratification is another matter.
 
At my institution, tenure-track folk (I can't speak to the adjuncts) really just need reasonably good scores, and preferably a generally upward trend over time. In our annual faculty reports we list our numerical averages, for each course, in four main categories--as well as the course enrollment and grade spread--and are encouraged to add interpretive commentary pointing out that, say, our score for "instructor's contribution to the course" was consistently our highest (which is important in survey/gen ed/required classes). We're also encouraged to include representative written comments.

We also have merit pay raises at my university (in addition to a guaranteed cost-of-living increase), but they can be applied for in teaching, research, or service (but not all three). To apply in teaching you do need superlative evaluation scores, as well as other notable achievements. . . but to apply in the other categories you just need to be performing as a teacher at a reasonably strong level.
 
"Why you insist on staking my future on the opinion of some kid who still has the dry-cleaning receipt for the tux he rented from prom night kicking around in the back seat of Mom's Honda is beyond me."

Well...hmm. While I don't think that student evals should be the only way teaching is assessed, I certainly see the value. You're trying to reach that kid, connect with him, teach him something. And if you aren't, then you aren't doing some part of your job.

Now, that's not to say student evals are the last word. We've all had teachers who taught us something we hated at the time but later realized was really valuable. Or maybe we learned something from a teacher we didn't especially like and were therefore inclined to give lower scores.

At the same time, students are the audience for your teaching. What they say ought to count for something. Just as long as it isn't everything.
 
As both a student and an instructor, I give a lot of thought to filling out and receiving evaluations. The most important thing a student can do if s/he wonders how to fill the thing out, is to ask! Ask how the evaluations are used. Who receives them? What effect do they have? Instructors will tell you (at least the official statement). At both of the research universities of which I have been a part, numerical summaries are reported to the department chair, written comments are sent directly to the instructor. Many a negative written comment has landed in the recycling bin with no record. This behavior is a bit pointless, as no administrator ever sees the comments anyway.

The scantron forms actually do matter. And at my universities, we don't receive every scantron--we receive numerical summaries. Importantly, this means that an individual who bubbled all "excellent" or all "very poor" is aggregated in with the masses. All received forms are tabulated (none are thrown out, ever), and you can't tell how many forms may have been filled out by bubbling only one column.

Of course, if your numerical summary is all "excellent" for every student for every item, I figure you pretty much rock--statistical utility be damned!
 
I urge my students (and I would urge your thoughtful student correspondent) to contact me directly -- after they receive their grade, if they prefer -- to talk to me about the course and help me improve it. By the second week of class, my students are pretty confident that disagreeing with me doesn't get them in any trouble, so I usually have a couple who take me up on my request, and that's really helpful for me (in addition to the feedback on the forms, which I do take seriously).

I try to be very transparent with my students about everything from why I choose the types of assessments I do to why it takes me X amount of time to grade things. Since my students generally understand my rationale and goals, I often get very good feedback from them, especially in person. (The ones who bother to talk to me in person are usually the ones invested in the learning process, so they're not giving unhelpful advice.)

I did have an experience, teaching in a seminar sort of setting where we taught high school students THROUGH the CC, where I got universally glowing reviews from students, the highest in the program, but one student and his friend decided it would be funny to give me shit evaluations for their entertainment (and told me, quite nastily, that they were doing so because they were angry their parents made them take the course). I didn't really think anything of it -- some high school kids are jerks -- until I got a very nasty, ugly, unprofessional e-mail fired at me from the director of the program telling me that my "behavior" was totally out of line, that she was appalled and upset, that I was not the sort of person who should be working with students, and so on and so forth. I was given NO opportunity to defend myself; it apparently did not occur to her that 2 bad evaluations making specific and large accusations, set against 200 stellar evaluations that didn't mention them, might raise a red flag, nor did it occur to her that the teachers observing the program would probably have mentioned it. I have not been invited back to the program.

It honestly still upsets me a little bit to think about it.
 
do any folks use mid-semester course evals of their own to get some feedback in mid-stream? I've found that to be helpful...

I've done this: one of my own, with very open questions. It was for a course that was in a lot of ways very difficult to prep for -- it was a brand-new science breadth course for visual arts students (scheduled, by the way, for one three-hour section a week, Friday afternoons 1-4 pm. Admire the demented genius of that particular piece of educational design.)

I had my own pre-, mid-term, and post-course questionnaires, and I made decisions based on the results of all of them. An unexpected outcome, particularly of the mid-term one, was that the students seemed genuinely appreciative that I was trying to make mid-course corrections and taking their input seriously.
 
Some professors I've had have instituted a mid-term KQS evaluation. KQS= Keep doing x, Quit doing y, Start doing z.

While I specifically note the wonderful things that certain professors do, I rarely write constructive criticism unless I am in a large class. I think my writing style (word choice, syntax, etc) would give me away, and in my field, I end up taking multiple courses with the same faculty members. I wouldn't expect negative repercussions, but I'd rather remain truly anonymous just the same.
 
Here's a side comment. For faculty and students out there commenting in web land: do any of you have a question on your evaluation forms that frequently produces inaccurate or incorrect results? If so, have you done anything to try to change this, like asking the appropriate campus committee to consider revising the evaluation form? Does this revision ever happen?
Example: on our students' scantron course evaluations, one of the questions is something like 'Faculty member was in office during his or her scheduled office hours.' Faculty across campus (including myself) often get very bleh evaluations in this category because, honestly, I don't think students understand what this question is asking. I know it shouldn't, but it really bothers me that I consistently get average scores in this category while doing very well in the others. All faculty and administrators seem nonplussed by the question, but I just wonder why we're asking students something which produces false results.

To try to solve this problem on my own, a few days before students do their scantron evaluations (these are administered by a staff member from an outside office) I make sure to directly review a few parts of my syllabus and, in particular, emphasize my keen willingness to meet with students during office hours or work with students during individual appointments that are above and beyond my regularly scheduled office hours. Despite this, I still get bleh scores. It seems clear that students interpret the question as 'was faculty member in his/her office at all times ready to assist me whenever I had any issue remotely connected to the course or even not?'

I can see this being an issue for faculty on our campus in the wrong situation -- with an overly fastidious department chair, an angry program director, or someone who didn't know that scores are generally off for this category. Moreover, the results of this question are often even more negative for adjunct faculty who, for lack of dedicated office space, frequently hold office hours in what could only nominally be called 'offices.' As an example, adjunct faculty in one department hold their office hours in a maintenance closet which is labeled "Maintenance Closet". Sometimes students have a hard time finding this office or are confused by the label, for obvious reasons -- although I kind of like the idea of treating office hours as a chance to have 'maintenance' done, actually. But that's another post.
 
We are in the midst of revising our campus-wide evaluation form. It is at least the second time we've done so in the 18 years I've been here.

In the specific program I administer, we have a 22-question scantron form for the fall course. Each fall I have explored changing questions, and I have changed or tweaked several over the last six years.
 
Great post, Dean Dad. I wish all administrators were as careful, conscientious, and thoughtful as you.

But, as eyebrows mcgee's comments show, they're not.

A problem with student evaluations at my CC is that they're way too high. On a 1-4 scale where 1 is excellent and 4 is terrible, the college-wide average is 1.2. I've actually had administrators and tenure review committees come after faculty members because their scores of 1.5 or 1.6 were "below average."

--Philip
 
I used to try to be funny. I think I once said that Professor Mean was always available in his office because he slept in the coffin under his desk.
 
I wonder if anyone has tried to combine the best of numeric and free-response evaluations. It should be possible to identify, say, 50 specific, often-made claims like "takes more than a month to grade papers", "changes the policies often", or "is often late to class" (including positive comments too). Students could put a checkmark next to the claims they support, and this data would be automatically processed into a histogram or some other chart. This should not be hard to implement, especially if evaluations are collected online.
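
A minimal sketch of the tallying such a form would need, assuming online collection -- the claim list and the responses here are made up purely for illustration:

```python
from collections import Counter

# A real form might offer ~50 such claims, positive and negative alike.
CLAIMS = [
    "takes more than a month to grade papers",
    "changes the policies often",
    "is often late to class",
    "gives clear, well-organized lectures",
]

# Each submitted form is simply the set of claims the student checked.
forms = [
    {"changes the policies often", "is often late to class"},
    {"gives clear, well-organized lectures"},
    {"changes the policies often"},
]

# Checked claims must come from the fixed list on the form.
assert all(form <= set(CLAIMS) for form in forms)

tally = Counter(claim for form in forms for claim in form)

# A crude text histogram, most-endorsed claims first.
for claim, count in tally.most_common():
    print(f"{count:2d} {'#' * count}  {claim}")
```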
 
Copied from my blog, regarding a course eval from Spring 2007:

From an anonymous student course evaluation:

"teach better, expect less, grade fairly"


Translation for those not familiar with undergraduates:


"teach better" =

"You did not teach directly from the book [which I didn't read], so when you actually expected me to know terms and ideas presented there, I felt and looked stupid. And even though you explained every assignment and made yourself available for office hours and e-mail questions, I never bothered to avail myself of those extra teaching opportunities because I thought I knew everything already. You are supposed to stand in front of the class everyday and keep me entertained while you teach me stuff I think isn't important. Don't you know that?"

"expect less" =

"Even though this class is a writing course, expecting students to write 3 short papers and one medium-length research paper is JUST. TOO. MUCH. WORK. And those in-class writing assignments designed to re-enforce the lesson just taught earlier in the week were JUST. TOO. MUCH. WORK. And you know your requirement that I attend to page length, double-spacing requirements, and proper margins? It's JUST. TOO. HARD to make sure all that is correct before I submit a paper. Oh, and proofreading... Are you kidding? That would require I actually pick up a dictionary or style manual, much like the one required for the course that I never bothered to read &/or purchase."

"grade fairly" =

"You should know you should just give me a B for showing up and being bored, and if I even put forth any effort, that means an A! It's not fair! My other teachers, whom I shall never name, told me I right good! Who are you to tell me I have problems? You're so arrogant and smug, standing up there with all that knowledge and experience! You're not better than me! I asked the 3 people I sit near, and different people got different grades, so that means you're just not grading fairly! I never asked you to explain why I didn't earn a higher grade, so you obviously just hate me! And my little dog too!"


Needless to say, I've quit adjuncting...but I miss teaching.
 
I'm just reminded of the comment from someone (it might have been Bitch PhD) that having students evaluate teaching is like having children evaluate parenting. You can get useful information, but only through some heavy textual analysis.
 
That was me. :-) Thanks for answering my question, Dean Dad, and thanks everyone else for the comments. I feel more enlightened than I was.

I agree with db's comment immediately above. I've been a student longer than most undergraduates, and I pay attention to what works for me and what doesn't, so I like to think I have something of value to say. However, I'd be horrified if my comments were not taken with a grain of salt at least.

This is not least because students often have different goals for a given course -- different from each other, different from the goals the instructors have for them.

For example, this semester I had a gen-ed science requirement and an upper level history seminar. My goal for the first was to pass the damn thing with a B+ or better. My goal for the second was to write the best history paper I've ever written. If the science course had been at all demanding, you might have found me complaining that it was "too much work" (although perhaps not on the evaluation form). Context matters: if it took too much time away from my history thesis, then it was "too much work."

Maybe it would be useful to ask on evaluation forms what students expected to get out of the course, or what their goals for it were. If you could get honest answers, they would provide a useful clue for interpreting the other answers.
 
Where I went to undergrad, the dept had supplemental questions to go with the evaluations. The evals also asked how many hours you spent weekly on the course, how often you came to class, and the grade you expected. Where I teach, we don't ask that. I wish we did. I haven't found evaluations helpful (I got last spring semester's in October -- not useful when I'm teaching the same class), mainly because there are so many questions I can do little with. "Did instructor effectively use class time?" Well, that could mean a lot of things. "Is instructor a good teacher?" Again, subjective. Also, I teach a required course that everyone has to take and no one wants to. So when asked if they would recommend this course to another student...no! Many have said that our evaluations don't accurately measure student learning and teaching effectiveness, but all our VPAA said was that we need to address negative evals in our yearly reviews.
 
My supervisor at the teaching resources center where I work recently told me that "course evaluations tell you more about students than they do about instructors." This is especially true of the mid-quarter interview I help to conduct here upon instructor request. The responses are disheartening.

For example, in a recent mid-quarter interview with one class, students told me to convey to their instructor that he shouldn't ask them questions when he already knows the answers: "Just tell us the answer and move on." Students, in other words, want to be entirely spoon-fed. Others express the belief that an instructor of x ethnicity shouldn't be teaching a class on y ethnic cultures. Helpful: Professor X, would you kindly change your race and abandon your 20 years of research in y?

So when the end-of-quarter evaluations (the ones that go on the record) roll around, students are evaluating the instructor not so much on whether he's actually teaching well (e.g., using interactive learning that promotes collaboration and critical thinking skills), but on whether he's teaching them in the way they feel is best (spoon-feeding) or whether his ethnic background gives him enough authority to teach the course.

It's all very frustrating, so we're looking at developing alternative forms of evaluation, ones that actually measure student learning in the course rather than the instructor's personality or students' (often very skewed) perceptions about what constitutes good teaching.
 