I’m always fascinated when the whole doesn’t equal the sum of its parts. It’s the kind of finding that suggests new questions.
I had two of those on Tuesday.
The first was a discussion on campus of the difference between the “course mapping” version of outcomes assessment and the “capstone” version. Briefly, the first version involves locating each desired student outcome in a given class -- “written communication” in “English composition,” say -- and then demonstrating that each class achieves its role. The idea is that if a student fulfills the various distribution requirements, and each requirement is tied to a given outcome, then the student will have achieved the outcomes by the end of the degree.
Except that it doesn’t always work that way. Those of us who teach (or taught) in disciplines outside of English have had the repeated experience of getting horrible papers from students who passed -- and even did well in -- freshman comp. For whatever reason, the skill that the requirement was supposed to impart somehow didn’t carry over. Given that the purpose of “general education” is precisely to impart skills that carry over, the ubiquity of that experience suggests a flaw in the model. The whole doesn’t necessarily equal the sum of the parts.
In a “capstone” model, students in an end-of-sequence course do work that gets assessed against the desired overall outcomes. Can the student in the 200-level history class write a paper showing reasonable command of sources? The capstone approach recognizes that the point of an education isn’t the serial checking of boxes, but the acquisition and refinement of skills and knowledge that can transfer beyond their original context.
The second instance was reading this piece from the Chronicle about the “online achievement paradox.” The paradox is that pass rates in online classes are generally about ten points lower than in classroom courses, but that students who take at least some online courses graduate at higher rates than students who don’t. Given that degrees require passing classes, the result is counterintuitive.
To its credit, the article wrestles with possible causes rather than claiming a tidy answer. I’d guess that student demographics play a significant role. In the settings with which I’m familiar, students in online classes skew older, whiter, and more female than students in classroom courses. (Last week’s column about three high school students was consistent with that: none of them had the slightest interest in going online.) The effects of “online” would need to be disentangled from the effects of race, class, and gender to get a good reading. If the demographics of the two formats were the same, would the paradox still hold?
Maybe it would, at least in part. To the extent that it does, we have a really good research question. Off the top of my head, I’d love to see a study that compares different mixes of onsite and online to find the “optimal” mix for graduation rates.
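For the statistically inclined, here’s a rough sketch of the kind of check I have in mind -- just a sketch, with a made-up data file and made-up column names, not a claim about how any actual study was or should be run. The question it asks: does taking some courses online still predict graduation once the obvious demographic differences are controlled for?

```python
# Hypothetical student-records table with columns: graduated (0/1),
# online_share (fraction of credits taken online), age, race, gender,
# and pell (a rough proxy for class). All names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("student_records.csv")

# Logistic regression: graduation as a function of online exposure,
# controlling for the demographics that differ across formats.
model = smf.logit(
    "graduated ~ online_share + age + C(race) + C(gender) + pell",
    data=students,
).fit()

print(model.summary())

# If the coefficient on online_share stays positive after the controls,
# the paradox isn't just a demographic artifact. Binning online_share
# (0%, 1-25%, 26-50%, ...) instead of treating it as linear would get
# at the "optimal mix" question.
```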
Much faculty resistance to outcomes assessment, I think, comes from an intuition that breaking the whole of a course into component parts does violence to its substance. There’s some truth to that, but it’s hard to prove in the absence of some sort of assessment, which is a paradox in itself. Some folks will try to escape the paradox by positing something “ineffable,” but in a world of limited resources, “ineffable” isn’t a terribly persuasive argument. I see the word as a placeholder. It says “if I had an argument, it would go here.” That doesn’t mean the position is false, necessarily, but it rests on a faith that can’t simply be assumed.
I’m hoping to make some progress this year in moving from an exclusive reliance on sum-of-its-parts assessment towards something better geared to capture the whole picture. In the meantime, though, I’m fascinated by the online paradox. Has anyone seen good research on that? Is there another explanation I’m missing?