Monday, November 12, 2007

What Kind of Grader Are You?

A left coast correspondent writes (this is a long one):

We have had a kerfuffle blow up today at my CC. This morning our VP
of Student Services sent out the following email:

"I wanted you to be aware that we recently received a request for our
grading records under the Public Records Act. We have secured
direction both from our legal counsel as well as the [California]
Chancellor's Office as to our mandate to comply with the Act. The
requestor asked for the grades assigned by every faculty member,
sorted by term, from Fall 2003 to present, to include faculty name,
course name and number, and total grade distribution.

We sent the grade records to the requestor yesterday. They include
all evaluative grades (A-F) assigned by each faculty member, along
with non-evaluative grades such as "I" (incomplete), "W" (withdrawal),
"MW" (military withdrawal), "IP" (in progress) and "RD" (report
delayed) for fall 2003 through summer 2007. No student identification
information of any kind was included in the file.

While we are aware that this type of request has been made to many
other colleges and universities around the state, we do not know what
the requestor intends to do with the information. I understand that this
might be very concerning to faculty and I want to assure you that we
sent only the information that we had a legal obligation to provide.
The Academic Senate will be discussing the issue at an upcoming
meeting. I urge you to join the discussion and/or contact me if you
have any further questions or concerns."

Last year's president of the Academic Senate posted this reply:

"Hi Everyone,

Actually, the Academic Senate has known about this for a while, and the
discussions have appeared in our minutes. This issue has also been
discussed at strategic council. The information, as far as I
understand, was not private information. It did not include any
identifying student information. The information released was really
information about us as instructors. How many A's do I give in my
Philosophy 6 class versus how many A's does Dr. H*** give in her
classes? What are the grade distributions? Do I give more A's with few
F's, or am I a professor who has earned the name of "C Minus T***"?
It's that sort of information that was released. All data that
identified students had to be stripped out before the requested
information was released. My understanding also is that advising the
Academic Senate of this information is really a courtesy so that we
can let everyone know what is happening and have discussions about the
meaning of grades, the difficulty of assignments, grading rubrics,
etc. for the various instructors in our departments. In other words,
if a student signs up for my Philosophy 6 class and another student
signs up for Dr. H***'s Philosophy 6 class, is it fair that I give
three scantron exams while she requires three 20 page papers? (This is
a fictitious example. :) ) What if she gives mostly C's with very few
A's while I give all A's? How do we work together collaboratively, as
a department, to ensure some kind of uniformity of assignments and
assessments?

Legally, there is nothing we can do about the release of the
information—again, as I understand it. This does however offer us an
invitation to discuss, in our departments, the meanings of grades and
the rigor of our assessments.

I don't mean this message to minimize the apprehension caused by this
announcement. I am apprehensive and expect that my students will soon
tell me that a new website has listed me under the "sucks badly"
category. However, until the law changes, we may want to focus our
attention on how we can improve instruction across the curriculum."


Nope, no hot button issues here!

Back in the day, professors used to post student grades on their office doors at the end of the semester. Enterprising students could use that to suss out who graded 'tough' and who was easy. Not that I ever dreamed of doing such a thing.

Now, of course, FERPA prevents that. Open records laws, on the other hand, treat aggregate grades (as opposed to individual ones) as public records, open to public scrutiny. So professors (at public colleges and universities, anyway) are in the position of having to guard individual grades closely while having years' worth of aggregate data posted on the internet. It's not a contradiction, strictly speaking, but it's certainly an odd juxtaposition.

There have always been student grapevines about which professors are easier than others. That's not new at all. I recall being warned by dormmates at Snooty Liberal Arts College not to take a particular professor who was famous for giving nothing but B-minuses to all and sundry. (It was a point of pride that I took him and did better than that.) The difference is that now the grapevine will have access to actual data.

From a dean's perspective, there's actually something useful in knowing – with data – that Prof Jones grades much more easily than the rest of the Basketweaving department. It gives some context for the student evaluations. If a professor grades unusually easily, I'm inclined to discount positive student evaluations. If a professor is known for strictness, I'm inclined to cut some slack on student evaluations. Of course, if a professor can't even buy love with easy grading, then I can be pretty confident that there's an issue. And a professor who grades tough but still gets glowing reports is probably doing something right.

All of that said, I'd be wary of putting data out there that isn't normed by class. In other words, the grade distribution in a particular class may reflect the professor's grading, or it may reflect the location of that class in the curriculum. Remedial classes, for example, almost always have much higher 'fail' rates than upper-level electives, regardless of who teaches them. So a professor who teaches a lot of remedial and first-semester classes will look like a tougher grader, all else being equal, than a professor who teaches mostly courses for majors. (By the time you get to grad school, grading is pretty much reduced to 'A' or 'Not A.') Telling me that there's a higher fail rate in remedial math than in calculus doesn't tell me anything about the instructors or the relative rigor of the courses; it tells me that the only students who take calculus here are students who really mean it.

At Proprietary U, one of my least favorite tasks as an administrator was to come down hard on faculty whose drop/fail rates were “too high.” (I don't usually recommend foot-dragging, but when it came to that, I foot-dragged like it was going out of style, until I found another job.) The justification, to the extent there was one, was that professors were supposed to find ways to reach even the more challenging students. In practice, of course, it resulted in lots of extra credit and some very creative curving. I considered it a stupid and offensive application of data that, treated differently, could have been useful.

In the age of the internet, certain kinds of discretion and/or secrecy just aren't viable anymore. If the data will escape, I think the burden on higher ed is to come up with ways to frame it productively. Let the colleges beat the profiteers to the punch, and put the data out there in ways that reflect what we know to be true. I'd suggest lumping several years' worth of data into a single report, and reflecting standard deviations from course means, rather than raw grades as such. If there's a particular professor who is consistently, conspicuously above or below her peers who teach the same classes, then I know I have something to examine. (Higher grades could reflect easier grading, better teaching, or the luck of the draw.) As any experienced teacher can tell you, some classes are stronger than others, which is why I'd put out rolling averages that encompass several years at a pop. Looking at one of my sections doesn't tell you much; looking at every section I taught over several years just might.
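
For the statistically inclined, that suggestion is easy to operationalize. Here's a minimal sketch, assuming section-level records in a pandas DataFrame with hypothetical columns (year, course, instructor, gpa); the two-standard-deviation cutoff is likewise just an illustrative threshold, not a policy recommendation:

    import pandas as pd

    def grading_outliers(sections, window=4):
        """Flag instructors whose multi-year average section GPA sits far
        from the mean of all sections of the same course over that window."""
        # Keep only the rolling window: the last `window` years of sections.
        recent = sections[sections["year"] >= sections["year"].max() - window + 1]

        # Course-level norms over the window: mean and spread of section GPAs.
        norms = (recent.groupby("course")["gpa"]
                 .agg(["mean", "std"])
                 .rename(columns={"mean": "course_mean", "std": "course_std"}))

        # Each instructor's average section GPA per course, same window.
        per_inst = (recent.groupby(["course", "instructor"])["gpa"]
                    .mean().rename("inst_mean").reset_index())

        # Express each instructor as standard deviations from the course mean.
        # (Courses with a single section yield NaN and drop out of the filter.)
        merged = per_inst.join(norms, on="course")
        merged["z"] = (merged["inst_mean"] - merged["course_mean"]) / merged["course_std"]

        # Only consistent, conspicuous deviations are worth a conversation.
        return merged[merged["z"].abs() > 2].sort_values("z")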

In a perfect world, a professor who fell unknowingly into 'outlier' territory would take being singled out as a wake-up call. Of course, in a perfect world, students would be motivated solely by love of learning, nobody would need remediation, and it would only rain at night. More likely, I would expect to see considerable defensiveness, attempts at blame-shifting, flat-out denial, and the usual huffing and puffing. Still, I don't have a conceptual problem with holding professors to account for how they perform the grading aspect of their jobs, since I (and most students) consider it part of teaching. I'm just concerned that if we don't take ownership of this in a thoughtful way, others will, and they'll do it in the stupid and thoughtless ways we rightly fear.

Wise and worldly readers – what do you think?

Have a question? Ask the Administrator at deandad (at) gmail (dot) com.


Comments:
I think that looking at grades isn't the same thing as assessing student learning. A prof whose grades are low might actually not be a good instructor and a prof whose grades are high might actually be teaching the little darlings something.

I do think there is something to be said about a complaint by students that the assessment methods vary by section of the same course. If I give three multiple-choice tests and my colleague gives three 5-10 page papers, the students must do more work for her grade than for mine. It is also easier for me to do my grading (and more objective, assuming the questions are good). I could easily handle many more students than she can, but we get paid the same. That is unfair both on the student level and on the workload level.
 
One problem with grades... I've got a student who legitimately cares about learning: he knows exactly what he's interested in and pursues it.

But, in his distributional requirements... He struggles to complete assignments. He recognizes the arbitrary nature of much of what he's being required to do in academia.

He writes well and he's mathematically literate, but doesn't want to write another US History 1 essay that's been written 10,000 times. He sees it as an academic exercise that he's got to complete.

DD, I know you've written about the different types of A students before, but how is a student like this well served by academia?
 
You seem to take for granted that profs who give higher grades have higher evals. But a recent study suggests there is actually no correlation. Why base your decisions on something you only have anecdotal evidence to show?
 
Your correspondent appears to be making reference to the "Pick-a-prof" operation that made a similar request of our CC (and every other CC and public uni in the state) last year. In our case, they only collected one year of data and had to pay for the processing required.

They offer a "professor" account, which allows you to comment on a particular course, although it is buggy as all get out if you use a decent browser rather than M$ IE. In addition, their grade data are incomplete (no summer data and missing data for one of my courses that is taught in a "non traditional" format) and current course data are flawed (same reason, ignorance of classes that don't meet at a scheduled time).

It has raised some interesting discussions, particularly as our department is interested in whether kids learned rather than merely passed. It is now possible to see the grade distribution in classes taught by people who have been perceived to be "too easy" or "too hard" when grading, and the results are quite interesting.

My own were also interesting, albeit biased (the data include a particularly superior group of students from last year). My lecture grades are bimodal (A and C), while my lab grades peak at B giving an overall flat distribution.

None of which matters to me, of course, since the only evaluation I care about is when students at the nearby Engineering College contact me and tell me how much better prepared they are than the "native" students.
 
Thanks for this discussion. It's important and, I think, confusing. How are we to interpret our own data? I'm one of the "hardest" graders teaching entry-level writing at a CC, and this class has a spectacularly high drop rate (and many students who stop attending never officially drop, so they end up getting "E"s, the new F). So the data isn't actually accurate about students who start and finish the class.
I'm in my class every day, and I don't understand the data on my classes. The students who stay in class learn and, subsequently, get As and Bs. The students who don't come to class get Es and Ds. Seems pretty simple, and yet...
What worries me the most is that we seem to have a generation of college students who are choosing classes based on the likelihood that they will get "A"s. What happened to intellectual curiosity?
 
Is there any wonder undergrads are obsessed with grades? When we set up a situation that's so insanely credentialist, being obsessed with grades becomes rational.
 
One of the unspoken problems of grading is, at times, internal consistency from the grader. That is, when faced with a stack of 60 papers, making the same comment, at the same level of detail, on the last paper as on the first becomes tricky.

As a composition instructor, I struggled with this problem. As a result, I created a Word add-in that allows me to capture, arrange, and insert the more commonly used feedback comments (defining grammar and style errors and their remediation, errors in understanding the assignment, etc.).

By creating a database of comments, I was able to speed up my grading by inserting the common comments with one click, and to devote my grading time to exploring the individual student's strengths and weaknesses.

As a side benefit, I was able to standardize my grading.
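
The commenter's actual tool was a Word add-in, but the underlying idea (a keyed bank of reusable comments) is simple enough to sketch. This is a hypothetical illustration in Python, not the commenter's implementation; the codes and comment texts are invented:

    # A keyed bank of reusable feedback comments. The codes and texts here
    # are invented for illustration.
    COMMENT_BANK = {
        "frag": "Sentence fragment: this clause has no main verb. Try joining "
                "it to the previous sentence or supplying a subject and verb.",
        "thesis": "Your thesis names a topic but not an arguable claim. What "
                  "are you asserting that a reader could dispute?",
        "cite": "This claim needs a citation; see the assignment sheet for "
                "the required format.",
    }

    def expand(codes):
        """Turn a grader's shorthand (e.g., ["frag", "cite"]) into the full
        comment text to paste into a student's paper."""
        return "\n\n".join(COMMENT_BANK[code] for code in codes)

    print(expand(["frag", "cite"]))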
 
tfc -- can you arrange for him to proficiency out in some useful fashion? That's what I did in undergrad.
 
As someone who has submitted hundreds of open record and open meeting requests to my community college, I find this post very interesting.

A massive request like the one the correspondent describes will undoubtedly catch the powers-that-be off guard and will trickle down the hierarchy of fear.

Yet, I’m unsettled by the comment, “If the data will escape, I think the burden on higher ed is to come up with ways to frame it productively.”

The information is public. It doesn’t “escape.” Anyone has a right to request it -- for any reason.

Attempts to “frame” the information are merely transparent efforts to limit how the information can be used and who can access it. That approach will bring colleges a nice lawsuit.

In Kansas, where I live, an organization receiving state or federal funds must comply with Kansas Open Records Act requests within three working days or possibly face action from the Kansas Attorney General. If the information takes longer to assemble, the keeper of records must offer a valid reason why he or she cannot immediately comply and how he or she plans to comply in a prompt manner.

I agree with DD’s suggestion to develop delivery methods for public information. If your organization has to create a method against the clock, with the media (both student and local) and the state attorney general watching, mistakes will happen.

Be thankful the request did not focus on staff or faculty e-mail accounts – which are also public information.
 
Miguel -- Your comment that "[a]ttempts to “frame” the information are merely transparent efforts to limit how the information can be used and who can access it. That approach will bring colleges a nice lawsuit" is simply false.

Any public agency -- or private one -- frames any information that it releases. What I'm suggesting is that we provide context -- class norms and standard deviations -- to prevent the otherwise predictable mistaken interpretations that we know will happen. Show that average grades in remedial classes are lower than average grades in upper-level classes to prevent the impression that upper-level classes are somehow easier.

Readers of the data would still be free to slice and dice it any way they please. That's the nature of data. But if we don't provide some pretty fundamental context, we'll spend years on issues that aren't really issues. There's nothing untoward about that.

I'm bothered, too, at the "gotcha!" tone, the "powers-that-be" crap. Life isn't good guys and bad guys, and it isn't all just one big conspiracy. There are competing interests, each combining some sense of higher morality with some basic self-interest. Some are 'better' than others, but I gotta tell ya, anybody who resorts to language like "powers-that-be" loses a lot of credibility fast.
 
Miguel -
If you have done this hundreds of times, surely you must know that all public records laws must respect other laws, such as FERPA, so not all information generated by a CC or university is public. This particular set of data is strongly protected by FERPA, so any organization would be extremely careful to ensure that nothing slips into the computer file that would expose the institution to legal liability. Reporting data for a small class, for example, is potentially problematic.
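
One common safeguard for exactly that small-class problem (an assumption on my part; nothing in the thread says this college used it) is small-cell suppression: withhold the distribution for any section whose enrollment falls below a threshold. A minimal sketch:

    # Small-cell suppression: mask grade distributions for sections too
    # small to keep individual students unidentifiable. The threshold of
    # 10 is a made-up example, not a FERPA-mandated number.
    MIN_N = 10

    def suppress(sections):
        """Each section is a dict with a 'grades' map of grade -> count."""
        out = []
        for sec in sections:
            if sum(sec["grades"].values()) < MIN_N:
                out.append({**sec, "grades": "suppressed"})
            else:
                out.append(sec)
        return out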

You might also know that there are honors classes that will have a different mix of students than other classes do, making those instructors look better than the norm. You might not know that some institutions make it easy to withdraw (hence easy to avoid an F) while others do not, and that more experienced students are more likely to put this to good use. It is important that people who are unaware of the many variables affecting grades know a bit about what they are looking at.

Nonetheless, it would be great fun to see if College of Ed curves are still as high as they were when I saw a data set for a major uni back in the last century. That was an eye-opener, particularly when matched with the entering SAT data.
 
Strikes me the college could have released the information by course and not by section. That would have sunk the ship right there.
 
What a great post. My previous employer responded to this proactively by making every section's grade distribution available on the registrar's website, searchable by department or class. This wasn't secret, but they didn't advertise it either. One day some enterprising individual printed out the previous semester's grades from our whole department and put copies in all faculty mailboxes. Surprise, surprise, there was an angry email from professor X, whose classes were always waitlisted--and happened to have an average grade around 3.8 (the department as a whole was around 3.0).

Of course students use the information to pick classes. One response to this is to worry about "framing" the information so we don't get bashed about it. Another is finding clever ways to re-conceal the information. A third response is for departments to seriously address the equity concerns that revelations of different grading raise. Grades matter--when two people get different grades in a course solely because one instructor's grades are statistically significantly higher, that is a problem we must address.

Universities can also use these revelations as a wonderful opportunity to tackle grade inflation. When I began work at that school, I was told that the department expected introductory class averages to be under 3.0 and upper level classes to be slightly higher. Public grade distributions made this possible. It was very obvious which departments cared about inflation and which didn't--and the ones that didn't were belittled for it.

My current employer doesn't release this information, and I hadn't realized I could FOIA it. Perhaps in a few years when I have tenure...
 
A 1999 study indicates that grade inflation may be, well, inflated:

Jeremy Freese, Julie Artis, and Brian Powell. 1999. "Now I Know My ABC's: Demythologizing Grade Inflation." Pages 185-94 in The Social Worlds of Higher Education, Bernice A. Pescosolido and Ron Aminzade (eds.). Thousand Oaks, CA: Pine Forge Press.
 