Monday, February 05, 2007
According to IHE, the governor of Texas is proposing mandatory tests for graduating college students, with state aid to the colleges tied to how well the graduates do.
It's one of those ideas that sounds smart for about ten seconds, until you actually start to think it through. The more you think about it, the dumber it gets. It's almost as if now that Molly Ivins has passed, Texas has declared it safe for bad ideas to roam free.
The admirable impulse behind it is to create incentives for colleges to teach well. Taken simply at that level, it's hard to object. But the method is screwy, and would almost certainly defeat the goal.
Who would be tested? If you only test seniors, then two-year schools are off the hook. If you test students in their final semesters, then two-year schools are automatically punished, since we would expect students to have learned less after two years of college than after four years of college. (If not, we'd have a pretty rough time justifying those last two years!)
Even if you're savvy enough to split the reference groups into two-year and four-year cohorts, selection bias remains an issue. The flagship university, with the toughest entrance standards, will almost certainly score the highest, since it had the best students walking in the door. It could lock its top students in a closet for four years and they'd score well. As a measure of the teaching performance of the university, it would be completely meaningless.
If a cc, with its open-door admissions policy, wanted to score well, it would have two options. Either achieve absolute miracles in the classroom – presumably, if they knew how to do that, they would have done it by now – or become attrition machines. Weed out students mercilessly, so that only the tippity-top students make it far enough to take the exams at all. (It's a variation on the Texas high school model of suspending the bad kids on test day, to look good on statewide exams.) This strategy would help with the scores, but at the expense of the fundamental mission of the college.
As with NCLB, there's a question of what's actually being measured. If you want to show the caliber of graduates, you don't usually have to do much more than look at the caliber of applicants. If you want to show the caliber of teaching, you need 'before' and 'after' scores to compare. Reward improvement, rather than absolute level. One could reasonably object that improving from 'semi-literate' to 'average' still doesn't scream 'college graduate,' but it certainly screams 'good teaching.' This would get around the issue of selection bias in admissions, though the elite schools might well object that if a student was already scoring in the 99th percentile, exactly how much higher can she go?
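The gain-score idea is simple enough to sketch in a few lines. This is purely illustrative — the college names, student scores, and the `average_gain` helper are all invented for the example, not part of any actual proposal:

```python
# Hypothetical "value-added" comparison: reward improvement
# (after - before) rather than absolute score level.
# All names and numbers below are invented for illustration.

def average_gain(scores):
    """Mean (after - before) gain across a college's students."""
    gains = [after - before for before, after in scores]
    return sum(gains) / len(gains)

# (before, after) test scores per student, out of 100
flagship = [(92, 95), (90, 94), (88, 93)]    # elite intake: little headroom
open_door = [(45, 68), (50, 72), (40, 65)]   # open admissions: big gains

print(round(average_gain(flagship), 1))    # 4.0
print(round(average_gain(open_door), 1))   # 23.3
```

On this measure the open-admissions college wins handily even though its graduates' absolute scores are lower — which is exactly the point about measuring teaching rather than admissions, and also exactly the ceiling problem the elite schools would complain about.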
Why would we expect students to take these tests seriously? What's in it for them? If they don't take it seriously, the scores will be utterly meaningless. Expect kids to either blow off the exams completely, or to show up hung over and/or otherwise chemically altered.
Some fields have considerable consensus over the proper content of an undergraduate curriculum; some don't. This proposal assumes that all do. What would a statewide exam for graphic design majors look like? Marketing majors? Communications majors? Dance majors? Besides, don't we allow individual colleges to determine their own curricula? The Supreme Court said we do.
Even if you get around every issue I've listed so far, and you actually could come up with an accurate ranking of which colleges are doing the best and worst jobs of teaching, it's still not at all clear what to do with that information. Suppose, for example, an enterprising bureaucrat crunched some numbers and found a correlation between high percentages of adjunct faculty and low scores. Would the state of Texas suddenly pony up millions to expand the full-time faculty ranks? (Hint: no.) More fundamentally, wouldn't we expect low-scoring colleges to blame their low scores – rightly, wrongly, or (probably) both – on being underfunded in the first place? Rewarding selective colleges – those with the wealthiest applicants, basically – with even more funding, while starving the colleges with the most first-generation students, would merely amplify existing biases. Going the other way – targeting resources where they're most needed – would create perverse incentives. Either way, something is wrong.
Fraud, fraud, fraud. Every time there's a high-stakes single test, there's massive and widespread fraud.
Transfers. If a four-year college has a high proportion of cc grads transferring in, whose teaching is actually being measured? The same applies to any school with high percentages of transfer students.
Private colleges. Would they be exempt? If so, I'd expect to see some very weird consortial arrangements pop up as various public colleges try to game the system. If not, I'd expect to see some really wild constitutional issues. (A statewide standardized exam for theology majors? Yikes! Question 1: What is the one true faith?)
These are just the issues I could think of off the top of my head. I'm sure there are plenty more – racial bias, ESL (a HUGE issue in Texas), the usual critiques of standardized tests, interdisciplinary programs, and the fundamental fact that American higher education is the envy of the world while our K-12 system is widely perceived as a joke, raising the question of who should be imitating whom, etc. – and many I've never even thought of. (I'll leave it to my wise and worldly readers to pile on stuff I haven't covered here.)
I hope the Texas legislature has the brains to think of at least a few of these, and to put this idea out to pasture. It has “disaster” written all over it.