Tuesday, February 23, 2010
Assessment as Marketing
There are at least two distinct understandings of assessment floating around, and they pull in opposite directions. The first is what I used to consider the basic definition: internal measures of outcomes, used to generate improvement over time. If you understand assessment in this way, then several things follow. You might not want all of it to be public, since the candid warts-and-all conversations that underlie real improvement simply wouldn't happen on the public record. You'd pay special attention to shortcomings, since that's where improvement is most needed. You'd want some depth of understanding, often favoring thicker explanations over thinner ones, since an overly reductive measure would defeat the purpose.
The second understanding is of assessment as a form of marketing. See how great we are! You should come here! The "you" in that last sentence could be prospective students being lured to a particular college, or it could be companies being lured to a particular state. If you understand assessment in this way, then several things follow. You'd want it to be as public as possible, since advertising works best when people see it. You'd pay special attention to strengths, rather than shortcomings. You'd downplay 'improvement,' since it implies an existing lack. And you'd want simplicity. When in doubt, go with the thinner explanation rather than the thicker one; you can't do a thick description in a thirty-second elevator pitch.
Each of these understandings is valid, in its way, but they often use the same words, with the result that people who should work together sometimes talk past each other.
If I'm a governor of an economically struggling state, I want easy measures with which I can lure prospective employers. Look how educated our workforce is! Look at what great colleges your kids could attend! I want outcomes that I can describe to non-experts, heavy on the positive.
And in many ways, there's nothing wrong with that. When TW and I bought our house, we looked at the various public measures of school district quality that we could find, and used them to rule out some towns in favor of others. We want our kids to attend schools that are worthy of them, and we make no apologies for that. They're great kids, and they deserve schools that will do right by them. I knew enough not to place too much stock in minor differences in the middle, but the low-end outliers were simply out of the question. I can concede all the issues with standardized testing, but a train wreck is a train wreck.
The issue comes when the two understandings crash into each other.
I'm happy to publicize our transfer rates, since they're great. But too much transparency in the early stages of improvement-driven assessment can kill it, leading to CYA behavior rather than candor. Basing staffing or funding decisions on assessment results, which sounds reasonable at first blush, can also lead to meaningful distortions. If a given department or program is lagging, would more resources solve it, or would it amount to throwing good money after bad? If a given program is succeeding, should it be rewarded, or should it be considered an area of relatively less need for the near future? (If you say both need more resources, your budget will be a shambles.) Whichever answer seems to open the money spigot is the answer you'll get from everybody once they figure it out.
Until we get some clarity on the different expectations of assessment, I don't see much hope for real progress. Faculty won't embrace what they see as extra work, especially if they believe -- correctly or not -- that the results could be used against them. Governors won't embrace what they see as evasive navel-gazing ("let's do portfolio assessment!") when what they really need is a couple juicy numbers to lure employers. And the public won't get what it really wants until it figures out what that is.
Most of my colleagues view assessment as a bureaucratic imposition from above. Most of them are receptive to improving their teaching, but they reject assessment as a path to that goal. They are pretty sure anything we put down on paper will be held against us at some future date.
Last decade in Ontario, the provincial government* wanted to remake the educational system. In addition to ruthlessly centralizing control and taxation**, they decided to "create a crisis" — we have the Minister of Education on tape telling his staff that they would have to do that to change the system. (Which is why I find Naomi Klein's Shock Doctrine so believable.)
When funding cuts and an expensive anti-teacher campaign didn't work, they started a program of standardized tests to prove that the kids didn't know much (and to justify bringing in voucher schools and giving tax money to private religious schools, among other options). The first standardized math test had an apparently dismal result: a 40% failure rate. What they didn't publicise is that they had bought the test from another province, and that the material on the test wasn't taught until the following year in Ontario — so 60% of the kids figured out questions on material they had never seen before, while writing an unfamiliar exam, which to me says that they must have had a solid grasp of the fundamentals. But that wasn't the point: the test had "proved" that the system was broken and needed reforms.
*Now the same folks running the country, more's the pity.
**Formerly the province set curriculum and standards, and local boards set tax rates and ran schools. If you wanted extras (like health programs or better libraries), your municipality could pay for them, so some boards had more funding (from higher taxes). When they centralized taxation they took all the money and spread it around evenly, but left the unequal tax rates in place, so formerly education-friendly cities are now giving money to towns that starved their schools.
The administration gets to demand assessment and push the workload onto faculty. The faculty consider it phoney baloney and do the absolute minimum -- faculty routinely make up numbers just to make the paper-pushers happy.
As far as I can tell, the real driving force is accreditation. We're currently suffering from accreditors gone wild: our accreditors are ridiculously big on process. They don't care whether we do a good job at teaching our students as long as we document the process in great detail and follow their hundreds of prescriptions about the process to be followed. We have great outcomes for our students, but the accreditors are more interested in talking about the process we use and the paperwork we generate. And part of that focus on process is a requirement to perform some kind of assessment (apparently it doesn't really matter what you do as long as you do something that you can call assessment). So we all get forced into this BS, where everyone knows what is going on, and folks play along with the game with a wink and a nod (and, in some cases, gritted teeth) because it's what the accreditors demand.
And meanwhile, every second I spend on our bogus assessment is one second I can't spend on improving our courses for our students.
Can you tell that I'm bitter about the whole thing? Sigh.