Sunday, June 03, 2018

Should Go Without Saying, But…


Savannah State U is apparently taking “DFW” rates -- that is, the total percentage of students in a class who get a D, or an F, or who withdraw -- above 25 percent as prima facie evidence of poor performance by the instructor.
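Just to make the threshold concrete -- with toy numbers of my own invention, not Savannah State’s actual data -- the arithmetic looks like this:

```python
# Toy numbers, mine alone, to make the 25 percent threshold concrete.
d_grades, f_grades, withdrawals, enrolled = 4, 3, 3, 40

dfw_rate = (d_grades + f_grades + withdrawals) / enrolled
print(f"DFW rate: {dfw_rate:.0%}")  # 25% -- right at the line
```

One more withdrawal in that section and its instructor is, by this policy’s lights, performing poorly.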

In a more perfect world, it would go without saying that this is a terrible, horrible, no good, very bad idea.  But we have the world we have, so I’ll say it, publicly, in writing, with my real name.

And it’s not only because the policy is being applied retroactively, as bad as that is.  Even if it were announced up front, it would be a terrible idea.

The key reason is that the same professors whose performance is being judged assign the grades.  That creates a basic conflict of interest. A professor who inherits a high-risk group will probably fail to meet the standard unless she lowers the bar, which is within her power to do.  Over time, the consequences are easy to predict: grade inflation, at least on the lower end, would become the new normal.

In the community college world, that would be particularly galling.  Unlike the Ivies, community colleges have proved relatively immune to grade inflation.  It would be a shame to give that up now.

In sequential courses -- the 101 class that leads directly to a 102 class in the same field -- I can see an argument for using grades in subsequent courses as indicators.  In a sufficiently large department, if the average pass rate in 102 for students who have taken 101 is 80 percent, but Prof. Smith’s former 101 students consistently hover around 40 percent, I’d consider that a red flag about Prof. Smith: an indicator that a closer look is probably warranted. An indicator like that only works, though, when courses are sequential and departments are large enough to create meaningful sample sizes.
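For the data-minded, here’s roughly what that check might look like in code. Everything in it -- the shape of the data, the 40-student floor, the 20-point gap -- is my own guess at how someone might operationalize a red flag, not anyone’s actual policy:

```python
from collections import defaultdict

MIN_STUDENTS = 40   # assumed floor for a meaningful sample size
FLAG_GAP = 0.20     # assumed gap below the department average worth a look

def flag_instructors(records, min_students=MIN_STUDENTS, flag_gap=FLAG_GAP):
    """Given (101 instructor, passed 102?) pairs, return the department-wide
    102 pass rate and the 101 instructors whose former students pass 102
    at a rate well below it -- a red flag, not a verdict."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for instructor, did_pass in records:
        total[instructor] += 1
        passed[instructor] += did_pass

    dept_rate = sum(passed.values()) / sum(total.values())
    flags = []
    for instructor, n in total.items():
        if n < min_students:
            continue  # too few students for the rate to mean anything
        rate = passed[instructor] / n
        if rate < dept_rate - flag_gap:
            flags.append((instructor, rate, n))
    return dept_rate, flags

# Hypothetical registrar data: 60 of Prof. Smith’s former 101 students,
# 24 of whom passed 102; 200 other students passing at an 80 percent clip.
records = [("Smith", i < 24) for i in range(60)]
records += [("Others", i < 160) for i in range(200)]

dept_rate, flags = flag_instructors(records)
print(f"Department 102 pass rate: {dept_rate:.0%}")
for name, rate, n in flags:
    print(f"Red flag: {name} ({rate:.0%} across {n} students)")
```

The structure is the point: a sample-size floor before any number means anything, a gap big enough to matter, and even then the output is a short list of names that warrant a closer look -- not a verdict.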

But even there, I insist on the difference between a red flag and a black mark.  A red flag indicates that a closer look is warranted. Upon that closer look, we might find other factors at work.

That’s how I used student course evaluations, in my deaning days.  I wouldn’t pay any mind to small fluctuations in the middle. I’d only look at the bottom few percent.  When the same names appeared there time after time -- which a few did -- that was a red flag. It indicated that a closer look was appropriate.  For all the criticisms of student course evaluations I’ve seen, none has convinced me that the “red flag” function is invalid.  Most of the time, the closer looks revealed real issues. (In one memorable case, they didn’t. The professor in question seemed fine. Not amazing, but fine.  I observed his class and came away thinking it was solid, maybe a little above average. But students hated him. I never did figure out why. When I asked a few of his former students, all I got was “he’s a %$#@.”  I didn’t consider that actionable intelligence. He came away unscathed.)

From a faculty perspective, upholding standards can be draining.  You see how hard some students try, and it can break your heart to tell them they fell short.  But sometimes that happens. It’s hard enough without adding fear for your job to the mix.

Until grading is separated from teaching, the idea of judging teachers by the grades they give will be hopelessly compromised by a basic conflict of interest.  That should be obvious, but apparently…