Wednesday, December 05, 2007
A new correspondent -- and apparently the kind of student we'd all like to have -- writes:
As a student, I have gotten a lot more careful over the years with how I
fill out student evaluations, because I know more about what they mean
for the instructors. I've read a lot of complaints on academic blogs
that students do not carefully fill out the evaluations and that their
criticism is sometimes unfair. I try hard to be both constructive and
fair. I almost always include written comments, unless I've got
absolutely nothing to say.
My question relates to how the evaluations are used by administrators.
The form usually asks for separate written comments addressing such
things as what could be improved about the course, how useful the
readings and assignments were, etc. Thinking that the feedback could be
useful to the instructors -- especially brand-new, inexperienced ones --
I've always answered them honestly, identifying both strengths and
weaknesses.
But I find myself wondering: if the way evaluations are used is as
abusive as some academic bloggers feel it is, maybe I should just opt
out, or bubble in all 5's and leave it at that.
If I give an instructor generally high numerical marks, and note in the
written comments that the instructor's assignments are clearly written,
obviously well thought out, and very helpful; the readings are
informative; and the class exposed me to concepts I hadn't been familiar
with but which were very important -- but that his lectures are somewhat
confusing due to disorganization and a lack of "signposts" -- have I
just hosed someone's (by all indications very promising) career by
pointing out a weakness?
Likewise, if I give extremely low ratings to a department-designed
cookie-cutter course, but not to the instructor herself, and make it
very clear in the written comments that it's the course itself I feel is
valueless, is the instructor going to be penalized for this?
If there's anything I've learned about course evaluations in my time blogging, it's that different administrators (and different colleges) treat them differently. I'll discuss how I treat them, and how I've seen them treated, and how I think they ought to be treated. It's entirely possible that not everybody treats them as thoughtfully as they should.
The evaluations we use separate questions into three categories: numerical about the instructor, numerical about the course, and open-ended.
When considering applications for promotion, we look at the first and third categories. The second category is mostly ignored. If I had my druthers, we'd include data from the second category in outcomes assessment exercises -- in which we look at the curriculum, rather than the instructors -- but we're not there yet. Maybe someday.
The first category lends itself easily to a single summary score, which is reported up the chain. The single summary score is relatively unhelpful for most faculty, but it does help you spot outliers. If one professor is coming in several standard deviations below everybody else -- it happens -- that's a red flag. It's not dispositive by itself, but it suggests taking a closer look.
The open-ended answers take the longest to read, but are by far the most useful. Some of them, I'll admit, I just tune out: "professor was mean she wouldn't let me do extra credit." Good for her. "Too much work." If everyone says that, I'd take a look; if a few do, I write it off to standard student griping. Some are just mean -- comments about clothes or hair or accents. And some are just inappropriate -- "she's hot!" Thanks for sharing.
The ones I've seen that have caught my attention are the ones like "he takes two months to grade papers." That's a specific complaint about something relevant that is usually within the professor's control. If that one pops up a lot, I check it out. If it's true, then that's something the professor needs to address. I recall one professor at PU whose students commented, almost uniformly, "he changes the rules every week. We never know what the deadlines are." To me, that's a serious charge. Similarly, anything like "professor is often very late to class" or "professor misses class a lot" is a bright red flag. If those are true, then we have a very real issue.
The comment about signposts in lectures wouldn't really register with me either way; I'd read that as intended for the prof, rather than for me.
All of that said, I've heard of some people with 'bright line' rules about numerical scores, and of adjuncts being non-renewed for trivial complaints. I've never done that, and I've never seen it done at either of the colleges at which I've worked in administration. But I can't say it has never happened anywhere.
I've noticed a strong 'halo effect' in the numerical questions. If the students like the professor, they'll forgive many flaws. If they don't, they'll nitpick every little thing. An evaluation in which the student draws a straight line down the 'poor' column doesn't suggest 'bad professor' so much as 'disgruntled student.' An evaluation with mostly 'fair's and 'poor's, somewhat interspersed, is actually far more damning, since it suggests actual thought.
To my mind, student evaluations should be an element in evaluating teaching, but they can't be the only element. Some classes will consistently 'score' relatively low, no matter who teaches them. (Remedial classes almost always score low, as do required math classes for humanities majors.) And it's certainly true that most of us with any kind of experience have had some classes that just 'clicked' and some that just didn't. My standard for myself as a teacher was that on good days, I should be really good, and on bad days, I should be at least professional. Any professor who claims never to have had a lesson fall flat just isn't very self-aware.
All of that said, I'll fall back on the rule I use when I write up class observations: write what you saw. Ultimately, you can't control how it's read or used, and you don't control your professor's career. If your professor has an idiot dean or department chair or provost, that's beyond your control. If your comments are honest and thoughtful and constructive, you're doing your part.
I'm almost afraid to ask, but...wise and worldly readers, what do you think? What have you seen?
Have a question? Ask the Administrator at deandad (at) gmail (dot) com.
In my most recent experience, there were some serious flaws with the program, and specific issues with certain faculty remained constant from semester to semester; it seemed student evaluations went into one big black hole, never to be seen again.
The university form is linked to P&T--they are carefully scrutinized, particularly when someone is up for tenure. Post-tenure, they're used to flag problems (like not having any for your class, semester after semester).
Generally, I ask students to spend time on both, stressing that the two forms have different purposes. I also am sure to disappear as they start to fill them out, asking a student to deliver the university forms to the departmental chair, and mine under my office door.
BTW: My doctoral mentor started me using this system, over 15 years ago as a method of "quality control." It's been a really helpful tool.
Most times this isn't a problem, but every once in a while I get a dud group of students -- it does happen -- that I can't wake up, and then I do worry. When this has happened, I've done some preparatory work and tried to make it known to the powers that be that I have a bad group of students in that particular section.
As for advice for the thoughtful student evaluator, I am also aware that some colleges do not include in an instructor's statistics any evaluation that fills the bubbles all in one column, whether those are all 1's or all 5's. I have made a habit of reminding my students not to do this, as, even if they totally loved or hated me, their opinion might not be counted unless they show a little variation.
One of the biggest problems with course evals is that, as DD's thoughts and other comments suggest, there are at least two audiences: P&T committees and the instructor. The interests of those two are not identical.
If evals are to be of benefit to the instructor, prompt return is essential as are questions that address specific elements of the course. The supplement that 447am uses is a great idea, as it can be tuned to the course itself and can bypass the usual bureaucratic delays in return.
I also think that DD's point about numerical summaries is right on and is much the way I have used them: they are useful in spotting outliers. Reading evals for patterns of comments, both good and bad, also reflects my own practice as an administrator.
Question: do any folks use mid-semester course evals of their own to get some feedback in mid-stream? I've found that to be helpful when instructors are aware that they're struggling; when I've heard a pattern of student concerns about a course, it's a suggestion I make to the instructor.
As for the value of evaluations in actually considering how to improve my own teaching, well, my response is mixed. Sometimes there's something of use, but more often than not there isn't. And I know many of my tenured colleagues make a practice of ignoring the university-administered evaluations. This isn't to say that they don't care about student feedback -- many of them do -- but after they get tenure, they abandon the university forms because those forms are so often useless in evaluating how a class actually went.
Anyway, I thought this was a great post!
Why you insist on staking my future on the opinion of some kid who still has the dry-cleaning receipt for the tux he rented from prom night kicking around in the back seat of Mom's Honda is beyond me. And I say this as one with a file cabinet full of gold stars from these students.
As far as returning student work, two months is inexcusable. Getting through a stack of 160+ essays for 8+ classes of students who demand instant gratification is another matter.
We also have merit pay raises at my university (in addition to a guaranteed cost-of-living increase), but they can be applied for in teaching, research, or service (but not all three). To apply in teaching you do need superlative evaluation scores, as well as other notable achievements. . . but to apply in the other categories you just need to be performing as a teacher at a reasonably strong level.
Well...hmm. While I don't think that student evals should be the only way teaching is assessed, I certainly see the value. You're trying to reach that kid, connect with him, teach him something. And if you aren't, then you aren't doing some part of your job.
Now, that's not to say student evals are the last word. We've all had a teacher who taught us something we hated at the time but realized later was really valuable. Or maybe we learned something from a teacher we didn't especially like and were therefore inclined to give lower scores.
At the same time, students are the audience for your teaching. What they say ought to count for something. Just as long as it isn't everything.
The scantron forms actually do matter. And at my universities, we don't receive every scantron--we receive numerical summaries. Importantly, this means that an individual who bubbled all "excellent" or all "very poor" is aggregated in with the masses. All received forms are tabulated (none are thrown out, ever), and you can't tell how many forms may have been filled out by bubbling only one column.
Of course, if your numerical summary is all "excellent" for every student for every item, I figure you pretty much rock -- statistical utility be damned!
I try to be very transparent with my students about everything from why I choose the types of assessments I do to why it takes me X amount of time to grade things. Since my students generally understand my rationale and goals, I often get very good feedback from them, especially in person. (The ones who bother to talk to me in person are usually the ones invested in the learning process, so they're not giving unhelpful advice.)
I did have an experience, teaching in a seminar sort of setting where we taught high school students THROUGH the CC, where I got universally glowing reviews from students, highest in the program, but one student and his friend decided it would be funny to give me shit evaluations for their entertainment (and told me, quite nastily, they were doing so, because they were angry their parents made them take the course). I didn't really think anything of it -- some high school kids are jerks -- until I got a very nasty, ugly, unprofessional e-mail fired at me from the director of the program telling me that my "behavior" was totally out of line, that she was appalled and upset, that I was not the sort of person who should be working with students, and so on and so forth. I was given NO opportunity to defend myself. It apparently did not occur to her that 2 bad evaluations making specific and large accusations, set against 200 stellar evaluations that didn't mention them, might raise a red flag, nor did it occur to her that the teachers observing the program would probably have mentioned any such behavior. I have not been invited back to the program.
It honestly still upsets me a little bit to think about it.
I've done this; one of my own, with very open questions. It was for a course that was in a lot of ways very difficult to prep for -- it was a brand new science breadth course for visual arts students (scheduled, by the way, for one three-hour section a week, Friday afternoons 1-4pm. Admire the demented genius of that particular piece of educational design.)
I had my own pre-, mid-, and post-course questionnaires, and I did make decisions based on the results of all of them. An unexpected outcome, particularly of the mid-term questionnaire, was that the students seemed genuinely appreciative that I was trying to make mid-course corrections and taking their input seriously.
While I specifically note the wonderful things that certain professors do, I rarely write constructive criticism unless I am in a large class. I think my writing style (word choice, syntax, etc) would give me away, and in my field, I end up taking multiple courses with the same faculty members. I wouldn't expect negative repercussions, but I'd rather remain truly anonymous just the same.
Example: on our students' scantron course evaluations, one of the questions is something like 'Faculty member was in office during his or her scheduled office hours.' Faculty across campus (including myself) often get very bleh evaluations in this category, and I honestly think it's because students don't understand what the question is asking. I know it shouldn't, but it really bothers me that I consistently get average scores in this category while doing very well in the others. Faculty and administrators alike seem nonplussed by the question, and I just wonder why we're asking students something which produces false results.
To try to solve this problem on my own, a few days before students do their scantron evaluations (these are administered by a staff member from an outside office) I make sure to directly review a few parts of my syllabus and, in particular, emphasize my keen willingness to meet with students during office hours or work with students during individual appointments that are above and beyond my regularly scheduled office hours. Despite this, I still get bleh scores. It seems clear that students interpret the question as 'was faculty member in his/her office at all times ready to assist me whenever I had any issue remotely connected to the course or even not?'
I can see this being an issue for faculty on our campus in the wrong situation -- an overly fastidious department chair, an angry program director, someone who didn't know that scores are generally low for this category. Moreover, the results of this question are often even more negative for adjunct faculty who, lacking real office space, frequently hold their office hours in what could only nominally be called 'offices.' As an example, adjunct faculty in one department hold their office hours in a maintenance closet which is labeled "Maintenance Closet". Sometimes students have a hard time finding this office, or are confused by the label for obvious reasons -- although I kind of like the idea of treating office hours as a chance to have 'maintenance' done, actually. But that's another post.
In the specific program I administer, we have a 22-question scantron form for the fall course. Each fall I've explored changing questions, and I've changed or tweaked several over the last six years.
But, as eyebrows mcgee's comments show, they're not.
A problem with student evaluations at my CC is that they're way too high. On a 1-4 scale where 1 is excellent and 4 is terrible, the college-wide average is 1.2. I've actually had administrators and tenure review committees come after faculty members because their scores of 1.5 or 1.6 were "below average."
From an anonymous student course evaluation:
"teach better, expect less, grade fairly"
Translation for those not familiar with undergraduates:
"teach better" =
"You did not teach directly from the book [which I didn't read], so when you actually expected me to know terms and ideas presented there, I felt and looked stupid. And even though you explained every assignment and made yourself available for office hours and e-mail questions, I never bothered to avail myself of those extra teaching opportunities because I thought I knew everything already. You are supposed to stand in front of the class everyday and keep me entertained while you teach me stuff I think isn't important. Don't you know that?"
"expect less" =
"Even though this class is a writing course, expecting students to write 3 short papers and one medium-length research paper is JUST. TOO. MUCH. WORK. And those in-class writing assignments designed to re-enforce the lesson just taught earlier in the week were JUST. TOO. MUCH. WORK. And you know your requirement that I attend to page length, double-spacing requirements, and proper margins? It's JUST. TOO. HARD to make sure all that is correct before I submit a paper. Oh, and proofreading... Are you kidding? That would require I actually pick up a dictionary or style manual, much like the one required for the course that I never bothered to read &/or purchase."
"grade fairly" =
"You should know you should just give me a B for showing up and being bored, and if I even put forth any effort, that means an A! It's not fair! My other teachers, whom I shall never name, told me I right good! Who are you to tell me I have problems? You're so arrogant and smug, standing up there with all that knowledge and experience! You're not better than me! I asked the 3 people I sit near, and different people got different grades, so that means you're just not grading fairly! I never asked you to explain why I didn't earn a higher grade, so you obviously just hate me! And my little dog too!"
Needless to say, I've quit adjuncting...but I miss teaching.
I agree with db's comment immediately above. I've been a student longer than most undergraduates, and I pay attention to what works for me and what doesn't, so I like to think I have something of value to say. However, I'd be horrified if my comments were not taken with a grain of salt at least.
This is not least because students often have different goals for a given course -- different from each other, different from the goals the instructors have for them.
For example, this semester I had a gen-ed science requirement and an upper level history seminar. My goal for the first was to pass the damn thing with a B+ or better. My goal for the second was to write the best history paper I've ever written. If the science course had been at all demanding, you might have found me complaining that it was "too much work" (although perhaps not on the evaluation form). Context matters: if it took too much time away from my history thesis, then it was "too much work."
Maybe it would be useful to ask on evaluation forms what the student expected to get out of the course, or what their goals were for the course. If you could get honest answers, it would be a useful clue in how to interpret the other answers.
For example, in a recent mid-quarter interview with one class, students told me to convey to their instructor that he shouldn't ask them questions when he already knows the answers: "Just tell us the answer and move on." Students, in other words, want to be entirely spoon-fed. Others express the belief that an instructor of x ethnicity shouldn't be teaching a class on y ethnic cultures. Helpful: Professor X, would you kindly change your race and abandon your 20 years of research in y?
So when the end-of-quarter evaluations (the ones that go on the record) roll around, students are evaluating the instructor not so much on whether he's actually teaching well (e.g. using interactive learning that promotes collaboration and critical thinking skills), but on whether he's teaching them in the way they feel is best (spoon-feeding), or on whether his ethnic background gives him enough authority to teach the course.
It's all very frustrating, so we're looking at developing alternative forms of evaluation, ones that actually measure student learning in the course rather than the instructor's personality or students' (often very skewed) perceptions about what constitutes good teaching.