Thursday, March 14, 2013


Friday Fragments

The obstacles aren’t trivial, but I drew hope from seeing Education Secretary Arne Duncan approvingly tweet a link to a story about Maryland K-12 schools adopting later start times.

Knowing what we know about adolescent sleep cycles, forcing teenagers to sit through pre-calc at 7:30 in the morning is self-defeating.  It’s setting everyone up to fail.

Yes, moving school days later will impact after-school activities, whether they be sports, clubs, or jobs.  But that’s not necessarily a bad thing.  And if it means that students are actually awake enough to learn something, I’ll take that deal anytime.


A few decades ago, Peter Sloterdijk defined cynicism as “enlightened false consciousness.”  It’s a sort of pose of wisdom that simply substitutes one form of illusion for another.  But its superior attitude actually blocks learning.

I was reminded of that in reading this piece in the Chronicle.  It’s superficially clever, painting outcomes assessment as a sort of epistemological circle.  If grades don’t tell us what we need to know, the author asks, then how does assessment?


The answer is that assessment looks at a different thing to answer a different question.  Grades look at the individual parts (“courses”) of a curriculum.  Assessment looks at the curriculum as a whole.  Does the whole equal the sum of the parts, or is something missing?  

It’s a simple enough distinction.  I would have expected the Chronicle to know better.


The Boy did us proud.  We’re pretty strict about rationing “tech time,” which is our catchall term for time on computers, the Kindle Fire, or whatever.  The kids chafe at the limits, of course, but that’s to be expected.

Last week TB wrote us a two-page manifesto explaining -- clearly and logically -- why he should get more tech time, especially on weekends.  The piece was pointed but not angry, well-constructed, and pretty convincing.  For an eleven-year-old, I thought that was pretty good.

We met him halfway, giving him more time on weekends.  I’m thinking that if an eleven-year-old boy can sit down and write out a rational solution to what’s frustrating him, we should encourage that.  There are certainly worse ways for adolescent boys to handle frustration.

Apparently, I’ve passed on the blogger gene.  Poor kid.


This Dad wins the week.  In order to let his daughter take center stage, he hacked Donkey Kong so that the princess rescues Mario.

From one Dad to another, I have to say: well played, sir.  Well played.

Quoting the post: “The answer is that assessment looks at a different thing to answer a different question. Grades look at the individual parts (‘courses’) of a curriculum. Assessment looks at the curriculum as a whole. Does the whole equal the sum of the parts, or is something missing?”

Bzzt. No. We often get asked to do assessment of learning outcomes in the context of a single course.

Now, maybe there's some Ideal Platonic Assessment Program that accomplishes exactly what its most enthusiastic proponents claim as they fill their veins with Kool-Aid. Great. But guess what? For those of us in the trenches, the Real Actual Assessment Reports that we are asked to produce are bureaucratic exercises that may be focused on (depending on who is asking) the entire campus, a single major, a track within the major, a sequence of courses, or a single course. These reports are demanded on short notice, and nobody cares much about whether they are meaningful.

The people who react with the greatest horror are actually not the cynics like me. They're people whose focus is pedagogical research. They are closer to the Ideal Platonic Assessment Effort than I am, and are much more favorable toward it. They are thus all the more upset when their deity is blasphemed by some bureaucratic exercise.

So, no, you don't get to respond to the actual, on-the-ground experience of people interacting with Real Live Assessment Efforts and say "No, no, that's not assessment." That is, in fact, assessment. It is what we are asked for, it is what we experience, it is what happens. If this isn't Assessment, then I guess we just haven't seen Real Communism.
As a mostly proud graduate of the Maryland public school system, I heartily approve of the start-school-later plan.

As a junior and senior in high school, I would get to school at 6:30 to be ready for my 7:17 a.m. first-period class. We band nerds would congregate in the band room, and the ones working on homework were required to wake up the rest of us who had passed out for just a little more sleep. Classes ended at 1:55, and we had until 2:20 to get our stuff together and change clothes for band practice that ran until 5 p.m.

I got so sleep-deprived that I started taking naps when I got home, which threw off my sleep cycle. My mom made me reset by staying awake super late one night. That was awful. I feel for kids now; I don't think I could handle that schedule anymore. And that was on top of a couple of AP classes, other band stuff, a part-time job on the weekends, and family obligations.

I say naps, naps for all.
Double dittos on ending 7:30 classes for teens. My school district adopted the early start times to reduce transportation costs, but the loss of learning was huge.
I understand that the philosophy behind outcomes assessment is that it attempts to determine how well the entire curriculum meets the educational goals that the institution supposedly supports. These goals are most often written as a set of very high-level rubrics: the ability to make and see connections, the ability to think critically, the ability to do research, and so on. These rubrics look very good on a bumper sticker or when touted before funding agencies or accrediting bureaus. The goals of outcomes assessment are indeed worthy ones: does your curriculum help your students achieve these objectives, and if not, what needs to be changed or fixed?

But how do you actually measure success in achieving these high-level rubrics? As physicists often say, if you can’t measure it, it doesn’t mean anything. In particular, how do you measure how well a particular course satisfies these high-level rubrics? For example, I teach an introductory physics class at Proprietary Art School, and the syllabus says that the objective of the course is to teach students how to solve simple problems in Newtonian mechanics. The exams are designed to see how well the students are able to do this, and one might think at first sight that the grades the students obtain on these exams would be an adequate gauge of how well they have met the objectives of the course.

But this is not good enough. I must now also show how well the students have met these high-level rubrics. How do I design some sort of test or exam that shows, for example, how well students see connections or how well they think critically? I suppose this could be done, but it would not be easy. Even if it could be done, would the results actually mean anything?

I remember back in my undergraduate days, a friend of mine told the physics prof that he could follow the individual steps of a derivation as it went up on the blackboard, but that he had a hard time seeing how it connected to other things or where the derivation was headed. The prof responded that anyone who could actually do those things would be a rare bird indeed, and a potential Nobel Prize winner.

Perhaps outcomes assessment should be completely decoupled from individual courses. Back in my undergraduate days, the physics department gave a comprehensive exam (lasting three days) to all of its physics majors in their senior year. The exam was designed to measure how well the students had mastered the material in all their physics courses, from the freshman level through the senior level. The results did not affect anyone's final grade; the faculty used them to assess how well the curriculum was working and to determine whether there were deficiencies in the program that needed to be fixed.

The exam was not really for the students—it was for the faculty. Maybe outcomes assessment should be done this way.

The biggest flaw in the Chronicle essay is that the author seems unaware that "outcomes-assessment assessment" is the most important part of the assessment process. We are expected to close the loop by doing exactly that. Further, the next cycle happens the next year, and the year after that for the next decade or three. This isn't a one-year game, AFAIK.

I also wonder if this is still theoretical for Dean Dad. The assessment is done for each specific course, with course-level outcomes that are each aligned with department and then college (and university) outcomes. I assume the aggregation process consists of blindly shoveling numbers into a giant spreadsheet, making that level rather pointless to the faculty. At that level, it appears to ignore the distinction between someone who passed the class and failed to achieve some outcome(s) and someone who failed the course and missed most (but not all) of the outcomes. They all end up in the same steaming heap of data.

At the course level, I have found it to be interesting and sometimes enlightening. It is, after all, possible to do well in a class while missing one particular topic in its entirety. It is important to see if there is a knowledge gap that gets hidden behind the grades. I can, and should, look at that even if the final result is that they all meet 80% of the course-level outcomes in a particular area and get a gold star in the spreadsheet.
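
To make the spreadsheet point concrete, here is a minimal sketch (in Python; the grades, outcome names, and the 80% bar are all invented for illustration, not taken from any real assessment report) of how an aggregate number can earn its gold star while a course-level, per-outcome view exposes the gap:

```python
# Toy records: final grade plus the course-level outcomes each student
# achieved. Every name and number here is hypothetical.
records = [
    ("A", {"kinematics", "forces", "energy", "momentum"}),
    ("A", {"kinematics", "forces", "energy", "momentum"}),
    ("B", {"kinematics", "forces", "energy", "momentum"}),
    ("B", {"kinematics", "forces", "energy", "momentum", "rotation"}),
    ("F", {"kinematics", "forces", "rotation"}),
]
ALL_OUTCOMES = {"kinematics", "forces", "energy", "momentum", "rotation"}

# Campus-level shoveling: everything collapses into one number. The A
# students who missed rotation land in the same heap as the F student
# who got it.
achieved = sum(len(hit) for _, hit in records)
possible = len(records) * len(ALL_OUTCOMES)
print(f"aggregate mastery: {achieved / possible:.0%}")  # 80% -- gold star

# Course-level view: the gap hiding behind the grades shows up.
for outcome in sorted(ALL_OUTCOMES):
    rate = sum(outcome in hit for _, hit in records) / len(records)
    print(f"{outcome}: {rate:.0%}" + ("  <-- hidden gap" if rate < 0.80 else ""))
```

The aggregate squeaks past the bar, so that level of the spreadsheet is satisfied; only the per-outcome loop reveals that most of the class missed rotation entirely.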

My main objection to the process is similar to what ArtMathProf articulated: the best assessment of course outcomes is whether a student can apply what they learned in the next class, not how they perform on a regular or artificial evaluation that is part of the current class. For example, a trig class fails if students can't work out vectors in physics or don't know the functional properties needed in calculus, regardless of what they knew when they took the final exam. Similarly, the only real test of the curricular outcomes for an AA student is how they do after transfer, while an AS or BA graduate should be assessed by their employer or (if relevant) grad school.