Friday, March 13, 2009
If It's True, It's Fascinating
The comments to the story are worth reading. Cliff Adelman suggests that Rojstaczer is not to be taken seriously. I tend to take Cliff Adelman very seriously, so I'm not going to endorse the findings of the study just yet.
Many of the other comments, though, are variations on “adjuncts are scared of bad student evaluations, so they inflate the grades.” I'm not convinced.
Selective liberal arts colleges tend to be expensive, and to have relatively low adjunct percentages. That's part of what they sell. Community colleges as a sector tend to have higher adjunct percentages than just about any other part of higher ed, yet the study singles them out as immune to grade inflation. If the study is correct, it pretty much blows the “scared adjuncts cause grade inflation” theory out of the water. The findings suggest that other variables – local culture, most likely – are far more powerful.
Anecdotally, at least, I'm inclined to support the 'local culture' theory over what I'll call Adjunct Determinism.
When I was at Proprietary U (the story doesn't mention whether Rojstaczer studied for-profits), grade inflation was rampant, and was almost official policy, but only on the low end. You could be as strict as you wanted with A's, but too many F's would get you fired. Although some of us objected strenuously to a fog-the-mirror standard for passing, when enrollments started to dip, it was made abundantly clear to the faculty that retention was to be pursued at almost all costs. Over time, naturally, students started to figure that out, with predictable results.
At Snooty Liberal Arts College, when I was a student, I don't recall too many people failing too many things. A's were tough, but B's were pretty common and D's or F's rare. And there was no such thing as an adjunct there.
At Flagship U, the TA's often graded much more demandingly than did the senior faculty. I interpreted that at the time as a form of insecurity on our parts. Frankly, I still do.
At my cc, I don't hear much talk of grade inflation, either from faculty or from students. (I hear plenty of complaints about other things – some valid, some not so much – but not about that.) That's not to say that we're unconcerned with student success – quite the opposite – but that we want that success to actually mean something. The adjunct percentage is far higher here than at SLAC or Flagship U, yet the grade inflation is much less. That's why, despite my admiration for Cliff Adelman, at least this part of the findings sounds right to me.
(On the flip side, the study singles out Princeton as having successfully attacked grade inflation. Did Princeton have a massive wave of adjuncts at some point?)
There are plenty of reasons to object to the adjunct trend, and plenty of reasons to object to grade inflation. But to assume that all bad things necessarily go together in a neat little package is a bit indulgent.
More likely, local expectations color grading, both for full-timers and for adjuncts. At the snootiest, high-end schools, where the adjunct trend is almost unknown, the culture of student entitlement is quite powerful. I haven't seen much of that at this level. If anything, the issue here is the opposite. Here, so many students come in with the emotional baggage of years of poor performance in the K-12 system that the challenge is in getting them to identify themselves as college material. At SLAC, getting a C was either an affront or a source of shame, depending on your mood; I've seen students here celebrate them.
I don't know if this study is actually valid or not, but it passes my gut test in a way that the 'scared adjuncts inflate grades' theory just doesn't. If Princeton has to issue a policy against grade inflation, but most community colleges don't, then I have a hard time pointing the finger at adjuncts. It's just not that simple.
Incidentally, I saw a similar thing at Duke (Duke Law, specifically). There, as at many top colleges and law schools, if you were smart enough to get in, they weren't going to flunk you out. Competition at the top of the class was keen, but nobody was particularly worried about flunking out.
And I think you'd find at many top-25 colleges and similarly situated law schools (I can't speak to other professional programs) that even fairly significant honor code violations and criminal convictions rarely result in official discipline, let alone expulsion.
On the other hand, and without any evidence, it seems more logical that the longer a person spends in a faculty position, the more his or her grading could be colored by local expectations.
If they were looking at average grades for similar courses, they'd probably find CCs to be lower (thus less grade inflation).
If they were looking at the distribution of As and Bs when years ago the students would have earned Bs and Cs, CCs would probably be found to have grade inflation. This is because CCs have lower course completion rates than other institutions -- our students drop out, flake out, etc., because they NEED to be at a CC in order to develop academic skills.
I agree with DD that adjuncts are not to be blamed. I think long-term school culture would more likely be the culprit.
This raises two issues in addition to the ones in DD's post. First, I wonder if there is a need to correct for vast changes in selectivity over time. That has luckily happened at my SLAC during the study period. Still, should the institution strive to keep its grades where they have been historically despite improved performance of students over time? Since the improvements are gradual, resisting inflation could be tough.
Second, I see potential conflicts between keeping grade inflation under control and having solidly positive results of assessment outcomes. Now I'm still early in my career and in the Humanities, so I'm still working to completely understand the worth of outcomes assessments (aside from their proof that the school is actually teaching and is therefore worthy of granting federal financial aid and attending). But shouldn't outcomes account for grades somehow? So if the GPA in the major among graduating seniors is 3.0, wouldn't that suggest that students might have only acquired a B's worth of the material from the course? I just don't know and wish someone would sit down with me and slowly explain how to make outcomes assessment be more than a giant time suck and flexing of my BS musculature.
I know people who have taught the same class with the same textbook at both a cc and a 4-yr college. In community college, the assignments are easier and less content is covered during the class. For example, a cc class using a textbook with 24 chapters would cover 16-18 of those chapters, while a 4-yr college class would cover a minimum of 22 chapters plus outside readings related to each chapter's content.
There is also a difference with regard to course evals. I teach my cc classes with the same expectations as my 4-yr college classes, and the cc evals are much lower because students are not accustomed to meeting higher expectations (which means more reading, studying, and writing). Grade inflation in some form or another happens everywhere.
As for adjuncts, their instructional effectiveness is all over the map, same as with full-time faculty. This is true in cc, 4-yr, public, and for-profit institutions. There's just no way to pigeon-hole adjuncts!
Unlike the economy, grading has a ceiling, so grade inflation can only go so far. In fact, I would love to see a forecast of "Grade inflation" that predicts that, if the trend continues, most students will earn a 5.2 GPA.
I think the concern around which this general discussion dances is the notion of lowering standards.
First, let's accept that no school will accept an AVERAGE GPA of 4.0 (and of course, to get that, we have to assume all students achieve a 4.0, right, math folks?). We can therefore rule out ever really having anything resembling "real" inflation.
That said, what affects grading?
1. Expectations of the faculty. This is really where most of the discussion seems to center. The unspoken assertion here seems to be that as faculty we are allowing poor work to count as "acceptable" work. We have lowered our expectations, and thus grade a bit more "generously."
2. Quality of the students. Have you ever had a graduate-school class where 50% of your students scored 800 on the Math portion of the GRE? I have. And all Type-A personalities. Is it really grade inflation if at least 50% of the class earns an A? Really?
3. Expectations of culture. Here is where the other discussions come in, and quite frankly, I am curious how the CC experience stacks up here. I realize I am about to get flamed, but since, in my experience, the students I get from the local community college are far less prepared for the rigors of my class than the students who take the prep courses at 4-yr schools, I have to believe that, at least in this case, the CC has lower standards, and lower expectations, of its students. (I can point to specific instances, as required.)
PLEASE understand, I am not painting all CCs, SLACs, or R1 schools with the same brush--just positing that the different expectations the cultures have result in potential conflicts as students move from one (sub)culture to another.
Anyway, just my 4 cents worth. (Inflation, you know.)
CC transfers have GPAs slightly (but not statistically significantly) higher than students who begin at the state university. When cc transfers enter the state university, their GPAs dip, but by the time they graduate, they've caught up with "home grown" state university students. CC transfers complete their 4-year degrees slightly (but not statistically significantly) more quickly than students who begin at the state university.
Thing 1: Our transfer students experience a drop in GPA of 0.5 - 1.0 grade points in their first year after transfer - meaning that if they had a 3.0 at transfer, the next two semesters usually see them pulling straight C's. As noted in the previous comment, they eventually catch up, but it is clear that when our transfers arrive, they are not ready/able to work hard enough to get the same grades they were getting in the past.
Thing 2: Our graduates have scores on the MCAT that are 10% lower than those of folks from our R1 system with equivalent GPAs.
Both of these things could be blamed on grade inflation, but there is a whole host of other explanations that I can think of (students work more after they transfer to our school because of higher tuition; the R1 students can afford prep courses). I think what both of these situations share is that the students, looking around at their cohort, feel that they are doing well, and then, when they move on to a more competitive cohort, fall down. I think the only way to fix this would be to have normative standards for every institution and periodic articulation meetings where we really compare learning objectives across institutions and "normalize" our grading standards using samples of student work. But I think that would be very hard on the students, because if they came from a less competitive place (like my school) and only received poor grades (as perhaps they should, given their performance on standardized tests), it would be massively demoralizing.
Since I know of one university where I have seen data that go back many decades (and not in the study, by the way) and show this effect, I have little doubt of its reality. That particular institution, like some in the study, had not changed its admissions standards (rather unselective) during that time period. What physicists call a "phase transition" (it almost looked like it was first order) occurred in the late 60s at that university.
In one case in his study, I suspect internal pressures. If you require a minimum grade of B in a certain course or courses to get into certain programs, you can expect pressure for higher grades. The B requirement itself might have resulted from grade inflation that made the C meaningless.
In others, where I know the institution has become MUCH more selective in that time period, just keeping existing standards will lead to grade inflation. In my own case, I have seen some grades go up because I am getting better students at my CC as a result of another institution becoming more selective.
Reminds me of a single class in CS grad school, dealing with Operating Systems implementation. After the semester started, it was explained to me that the professor liked to identify the people who really got the material and give them As, and give the rest Bs.
I think this usually works because the course is really demanding and entirely elective, and the subject area is finicky. The difference between getting 14 steps right and getting all 15 steps right is a non-booting OS. Enrollment for the class was very low.
I think I brought on a sort of crisis when I failed to get any of the projects working but then blew the final exam out of the water with a 94 percent. The next nearest grade was something like a 68 percent. I clearly "got it" on paper, but the machine disagreed =(
However, there IS a way to pigeon-hole older adults!!
The word verification for this is 'nonsuc' - that cannot be a coincidence!
It's a lowering of standards--accepting lower quality work and calling it good.
Perhaps we should use an analogy: it's getting a GM but calling it (and paying for) a Honda.
There is a culture shock for a cc student at the amount of work, effort, and analysis required at the 4-yr level. In many ways, cc is viewed as an extension of high school. Herein lies the problem. We are teaching children to "spit back" answers to get the school districts' "Excellent" ratings on standardized tests, which are tied to school funding. Now, because high school is not preparing students to critically analyze, synthesize, and apply knowledge, the expectations are lowered to accommodate the reality that is the typical student entering college. I don't think this is only happening at the cc level. I do know that as a student who was very adept at "regurgitating" knowledge (I graduated 7th with a 3.7 HS GPA), I struggled when hit with college-level expectations.
Note, however, that this is an explicitly competitive grading system, where (within a range) your grade depends not just on how well you know the material, but on how well you know the material as compared to how well other students know the material. Given that the function of LS grades is, AFAICT, to help employers select employees, this seems to be a fairly reasonable way to grade.
On the other hand, when I taught language classes as a TA, I would have been happy to give the entire class A's if they did equally well on the test, since the purpose of grades in those classes was, IMO, to demonstrate how well the students had mastered the material. As it was, of course, the class would be about 1/3 A's and high B's, 1/3 B's and low B's, and 1/3 C's and below. Since a significant part of the grade depended on homework and essays and similar projects for which some credit was almost automatically given, in practice you had to really work for D's and F's. Although of course some students did.
For example, when I took intro to physics at a CC in the late 1980s, we used calculators on our exams. When my dad took a similar course at Michigan Tech in the mid-1960s, he had to use a slide rule. He also did fewer homework problem sets, and the textbook was different. How could those two sets of grades be comparable?
The same goes for language teaching, literature, history, and the rest of the humanities. In history we have word processors and library databases that dramatically alter the instructor's expectations and the skills students are required to master. And I would argue that those changes have been for the better and the students are better for it. You can't step into the same river twice, so comparing grades from different decades is meaningless.
Students were not better, "back in the day." The instructors were not 'more rigorous.' The standards were not higher. They were just different.