Sunday, July 12, 2015


Knowing Why They Know

Maybe political theorists just have a certain way of thinking.  That might explain it.

Betsy Barre, the Assistant Director of the Center for Teaching Excellence at Rice (and a fellow political theorist by training), fired off a set of tweets this weekend that made a lot of sense to me.  She was addressing the disconnect between relatively small gender effects in quantitative student evaluations and the relatively large gender effects in qualitative feedback.  In other words, while the “scores” of male and female professors aren’t very different, the comments they get are.  As Barre put it (quoting tweets):

Unscientifically, but with many years of experience of faculty evaluation (and of teaching), I think Barre is onto something.  

Last week, in the post about tolerance for ambiguity, I mentioned watching students in a writing class grasp for words to explain a hunch.  Once they found a cliche, they grabbed it and held on tight; even if it didn’t really work, it offered a sense of familiarity.  It came closer to feeling like an answer than anything else they had at the ready, so they used it.  My job as a teacher was to help them get past that response, and to develop the skill of articulating their own answers.  

But teaching evaluations are supposed to be uncoached, by definition.  (I once received one in which a well-meaning student offered that the class “helped me write more clearer.”  Great…)  In the comments section, students can easily fall back on long-ingrained habits.  

The good news, to the extent that there is good news, is that the unconsciously loaded language doesn’t always go very deep.  When asked to express the same ideas in a different form, such as numerically, the differences shrink notably.  

From a faculty perspective, of course, the larger worry may be less what students write and more how it’s read.  Here, too, Barre got it right: a single, isolated comment about almost anything should probably be disregarded, but if the same issue comes up over and over again, from both good students and bad, there may be something to it.  If I saw one or two students complain about too much reading, I’d write it off to “college is supposed to be hard.”  If a dozen students complained about the professor chronically showing up a half-hour late for class, I’d follow up.

I know it’s an article of faith among many faculty that student evaluations are meaningless at best, but I’ve been struck over the years that they’re usually in the ballpark.  They’re imperfect, of course, and I’d oppose any move to take them as gospel.  But if you observe enough professors, and read enough evaluations, you’ll notice they tend to get it broadly right.  If the sample size is decently large, the wisdom of crowds seems to kick in.  Yes, hot-button social issues can produce some jaw-dropping comments, but they’re less common than you might expect.  

Barre’s juxtaposition of quantitative and qualitative feedback suggests that some of what comes across as biased or inappropriate judgment is, in fact, an artifact of weak writing skills.  From the perspective of over a decade of administrative experience, I think she’s right.  

None of that is meant to discredit or discount claims for social justice.  It’s just to say that when we read student comments, we have to remember that we’re reading student writers.  Student writers tend to fall into certain traps.  They know what they want to convey, but they don’t always know why they know, or how to explain it in language that people with graduate degrees would consider appropriate.  When they shift to another register -- numbers -- they become more surefooted.

Thank you, Professor Barre, for connecting the dots.  And sorry for the cliche.

I like this as a possible explanation. I wonder if there is any way to test it.

More personally, I wish I could figure out why my evals are almost always higher in the spring than in the fall (I've only had one academic year that didn't follow this pattern, where the fall numbers were abnormally high). I don't think I'm a better or worse teacher in either semester, but my students clearly do.

For me, the problem is this: if numbers don't lie, but students can't explain what those numbers really mean, how am I supposed to use them to improve my teaching?

We used a bubble-sheet form for ages, on which I think I got one or two free responses per decade, so there was no real feedback to study. We recently shifted to an online system where response rates tend to be much lower than before, so the statistics go out the window, but the students who do respond take the time to write comments. It's too early to interpret these, but they seem to be writing to other students (as if it were The Site That Shall Not Be Named) rather than to me or the Dean.

Apropos what Sapience wrote @8:53pm, my evals seem to correlate only with how energetic I felt in a particular class in a particular semester (usually load related) and how well they are doing (retention rate).

I'd suggest that Sapience take a look at some well known variables: when does the class meet, did the class fill early, how academically prepared are the students, and did they pick YOU over an alternative prof. Any of those could have a fall-spring variation. They are less likely to rate your work highly if they don't want to be in one of those Death Valley afternoon sections.
Our online evaluation system also suffers from very low response rates (which kind of defeats the whole purpose, since the data is meaningless if fewer than a handful of students actually submit the evaluation).

My grad school also used an online system and had an interesting incentive for getting students to take the time to respond. If you submitted evaluations for all of your classes by the end of final exams, you were automatically given access to the data and comments, which could be sorted by professor or by course, early enough before the following semester that you had time to add/drop/swap sections based on the data. There may have been some minimum number of times a professor or course had to be rated before they appeared in the database, to account for the reliability issues with small data sets.
I've seen a big difference in the types of written, qualitative responses between genders. Female professors always, ALWAYS have stories of repeated, inappropriate, sexist, personal, irrelevant, mean, ad hominem comments. Male professors are usually shocked when they receive their first one. Female professors are shocked when they get fewer than 3 per semester.

Is this only at the 3 schools I've been at?
CCPhysicist: the pattern of my evals being better in the spring vs. the fall has now held for almost a decade, and has persisted despite changing schools. For the last two years, I have taught the same class at three different times during the day (for example, my schedule last semester was 12, 1:30, and 3, T/Th). There is usually very little variation between the evals that individual sections give me in a given semester (and I usually think of them in aggregate anyway). My students are generally all very academically prepared. I have had only one semester where I taught classes in which students didn't have the option of taking the same class with another instructor--and those were the classes where I had the highest student evals of my entire career. They were not an exception to the general pattern, however.

Sapience: Do you happen to teach mostly freshmen? I could see fall being their "welcome to college-level expectations" semester, which would mean they would tend to rate all of their professors lower for not providing the forgiving, nurturing experience they were used to in high school. By spring, hopefully they've figured out how college works and are no longer blaming you for being a college professor instead of a high school teacher.
@Anonymous 12:35 PM: This seems to be the case most places, though the virulence varies somewhat. I think the plan is to pretend it doesn't exist and just let things be meaninglessly harder for female faculty.

Anonymous @12:35pm -

That is a common problem. It is almost pointless for a male professor to provide advice on certain teaching situations, because what is interpreted as appropriate (and effective) behavior by a man is considered bitchy by a woman.

Apart from the usual issues of misogyny, some of this probably originates in K-12 where women are seen as lower level teachers who need a Teacher's Edition to be able to know the correct answers to student questions, not professionals.

Sapience @9:38pm -

I guess you just have to ignore all of my observations. Maybe they are just happier in the spring!