Tuesday, July 05, 2016


The Whole and the Sum of Its Parts

I’m always fascinated when the whole doesn’t equal the sum of its parts.  It’s the kind of finding that suggests new questions.

I had two of those on Tuesday.

The first was a discussion on campus of the difference between the “course mapping” version of outcomes assessment and the “capstone” version.  Briefly, the first version implies locating each desired student outcome in a given class -- “written communication” in “English composition,” say -- and then demonstrating that each class achieves its role.  The idea is that if a student fulfills the various distribution requirements, and each requirement is tied to a given outcome, then the student will have achieved the outcomes by the end of the degree.

Except that it doesn’t always work that way.  Those of us who teach (or taught) in disciplines outside of English have had the repeated experience of getting horrible papers from students who passed -- and even did well in -- freshman comp.  For whatever reason, the skill that the requirement was supposed to impart somehow didn’t carry over.  Given that the purpose of “general education” is precisely to carry over, the ubiquity of that experience suggests a flaw in the model.  The whole doesn’t necessarily equal the sum of the parts.

In a “capstone” model, students in an end-of-sequence course do work that gets assessed against the desired overall outcomes.  Can the student in the 200 level history class write a paper showing reasonable command of sources?  The capstone approach recognizes that the point of an education isn’t the serial checking of boxes, but the acquisition and refinement of skills and knowledge that can transfer beyond their original source.  

The second instance was reading this piece from the Chronicle about the “online achievement paradox.”  The paradox is that pass rates in online classes are generally about ten points lower than in classroom courses, but that students who take at least some online courses graduate at higher rates than students who don’t.  Given that degrees require passing classes, the result is counterintuitive.

The article struggles to explain causes, to its credit.  I’d guess that student demographics play a significant role.  In the settings with which I’m familiar, students in online classes skew older, whiter, and more female than students in classroom courses.  (Last week’s column about three high school students was consistent with that: none of them had the slightest interest in going online.)  The effects of “online” would need to be disentangled from the effects of race, class, and gender to get a good reading.  If the demographics of the two formats were the same, would the paradox still hold?

Maybe it would, at least in part.  To the extent that it does, we have a really good research question.  Off the top of my head, I’d love to see a study that compares different mixes of onsite and online to find the “optimal” mix for graduation rates.  

Much faculty resistance to outcomes assessment, I think, comes from an intuition that breaking the whole of a course into component parts does violence to its substance.  There’s some truth to that, but it’s hard to prove in the absence of some sort of assessment, which is a paradox in itself.  Some folks will try to escape the paradox by positing something “ineffable,” but in a world of limited resources, “ineffable” isn’t a terribly persuasive argument.  I see the word as a placeholder.  It says “if I had an argument, it would go here.”  That doesn’t mean the position is false, necessarily, but it’s based in a faith that can’t be assumed.

I’m hoping to make some progress this year in moving from an exclusive reliance on sum-of-its-parts assessment towards something better geared to capture the whole picture.  In the meantime, though, I’m fascinated by the online paradox.  Has anyone seen good research on that?  Is there another explanation I’m missing?

And you haven't ever had to read lab reports! (Gas Station without Pumps recently blogged about that problem in his engineering lab class.) Florid prose and technical writing are not compatible, although that combo can be entertaining. But what I can't comprehend (in two senses of the term) are students who never use a paragraph break in a page or two of analysis.

On your first point, our humanities classes also assess writing separately from content (two distinct outcomes), so it is a blend of capstone and mapping. In principle, that could allow for a deeper look at the program if you had the tracking data to see if students of one professor were more likely to retain the skill. (One problem is that some students work at actively forgetting what they learn each semester.) On the science/applications-of-math side of the fence, we fight like crazy to convince them that this behavior is not good for their career path.

On your second point, could it be that the on-line classes see the phenomenon where a C- student who worked like crazy to pass has better retention than the A student with a great short-term memory? Maybe the lower pass rate reflects that they have to learn how to teach themselves, which is harder at first but can lead to later success.

Finally, what I like about outcomes assessment is that it gives me an independent look at specific key parts of the course. Exam grades and passing rates can hide the fact that students who pass the exams and the course are very weak on one particular skill or topic.
The article suggested flexibility was a reason for success. If a student can't take (or can't retake) a course when it's offered, s/he can't graduate.

The analogy from WAC to understand writing is that it's a lot like a class in ball handling. Just try to take a class in generic ball handling and see how that works for you in later golf, basketball, and soccer games. Only some things are relevant; it's not merely the discipline teaching the writing but also the genres required. For example, the "how" of using sources isn't universal.

We see that all the time in physics — students learn a physics concept as 'something for class', but revert to their understanding of 'how the world really is' when solving a practical problem. Redish and Mazur have both published a lot on this, if you want to do some reading. (CCPhysicist can probably recommend specific papers.) One in particular that caught my eye was this one:

Making Meaning with Math in Physics: A Semantic Analysis
Edward F. Redish and Ayush Gupta, Department of Physics, University of Maryland, College Park, MD 20742-4111 USA [redish@umd.edu; Ayush@umd.edu]

Physics makes powerful use of mathematics, yet the way this use is made is often poorly understood. Professionals closely integrate their mathematical symbology with physical meaning, resulting in a powerful and productive structure. But because of the way the cognitive system builds expertise through binding, experts may have difficulty in unpacking their well-established knowledge in order to understand the difficulties novice students have in learning their subject. This is particularly evident in subjects in which the students are learning to use mathematics to which they have previously been exposed in math classes in complex new ways. In this paper, we propose that some of this unpacking can be facilitated by adopting ideas and methods developed in the field of cognitive semantics, a sub-branch of linguistics devoted to understanding how meaning is associated with language.

TLDR: students can do math in math class, but that doesn't transfer the way we expect because they are lacking a lot of implicit knowledge — which is the kind of thing that doesn't show up well on the outcomes assessments I've seen.
My thinking on the successful online course completion -> higher graduation rates is that people who can successfully pass an online course are highly self-motivated. Given all the distractions that keep people from completing online courses, these are the students who want it badly enough that they make the time. Of course demographics will play into that (as it does everything), since more privileged folks will have fewer distractions overall, but I bet if you could compare within demographic chunks, self-motivation is what distinguishes online course passers, and then those folks are also more motivated to pass their regular courses.
Self-motivation is what I was talking about, but there are self-motivated students who have never learned that reading the textbook can help you pass a class! I'm less sure about a demographic tie to distractions, because well-off students might be more likely to replace time working with time on Facebook. I wish there were a way to measure that!

Nice choice for this blog, because it touches on the problem of writing in classes that are not English class, but the problem they address has worsened significantly in the last 20 years. Because all of our upper-level math classes teach how to use a specific graphing calculator, all of their problems are restricted to y as a function of x -- because that is all the calculator can do. There is no use of symbols more appropriate to a problem, like the pseudo-problems where N was nickels and D was dimes. Plotting x as a function of t is crazy talk. (I use y versus t early for that very reason, but also to reverse the down=plus convention used in their math textbook examples.) If they haven't had physics before chemistry, they can freak out when they see the log of a reaction rate plotted versus the reciprocal of temperature. The chemists have to convert that into y versus x.

Although I agree on the need to construct meaning, the problem I mention is deeper than that. They don't remember math from semester to semester in math classes. Pre-calc starts with logs, which they "learned" the semester before and confronted on the final exam just a few weeks earlier. (It is even assessed as an outcome.) Yet many start pre-calc with zero memory of having seen logs at all. Perhaps they see it as math they will never use, or this is just a carry-over from the HS expectation that it is all taught again the next year. Several years ago I blogged about what I describe as a failure to understand the CONCEPT of prerequisites. That article is linked from the third paragraph of the following one:

The concept of "transferable skills" is wishful thinking, in my experience.

A student finishes one semester, and starts a new class in a new subject the next semester. The whole context is different, so nothing you learned last semester applies. It is similar to the way you can get up from your chair, go into another room to fetch something, and forget what it was you came for. The context has changed.

Also, many students do not want to know half (or more) of the stuff they are taught. They will learn just enough to pass the assessment, and then wipe their memory banks.

The only thing that can help is to make the various stages of the course more continuous. A single theme or collection of skills should be studied right through the three years. Courses tend to be too blocky.

Don Cox
Scheduling was the most trivial explanation that came to mind. Since you noted conversations with students indicating that most of them had taken online classes not out of preference but because of scheduling issues, at least some of the explanation might be that. You can't graduate if you can't get the courses you need, and anything that gets people through in a more streamlined fashion might help graduation rates.

Another possibility is that online courses hone persistence, or, on a related note, that they affect motivation differently. For example, if you fail an online course, you might be more willing to retake it in person, with less emotional baggage than if you had failed a traditional class. (I first took calc online and it was dismal; I didn't complete it. I later took it in a "small group learning section" and it was fine.)

I've blogged on outcomes assessment (with an example from my department) at https://gasstationwithoutpumps.wordpress.com/2016/07/06/outcomes-assessment/