Monday, February 05, 2007
According to IHE, the governor of Texas is proposing mandatory tests for graduating college students, with state aid to the colleges tied to how well the graduates do.
It's one of those ideas that sounds smart for about ten seconds, until you actually start to think it through. The more you think about it, the dumber it gets. It's almost as if now that Molly Ivins has passed, Texas has declared it safe for bad ideas to roam free.
The admirable impulse behind it is to create incentives for colleges to teach well. Taken simply at that level, it's hard to object. But the method is screwy, and would almost certainly defeat the goal.
Who would be tested? If you only test seniors, then two-year schools are off the hook. If you test students in their final semesters, then two-year schools are automatically punished, since we would expect students to have learned less after two years of college than after four years of college. (If not, we'd have a pretty rough time justifying those last two years!)
Even if you're savvy enough to break the reference groups into two-year-degree and four-year-degree reference groups, selection bias remains an issue. The flagship university, with the toughest entrance standards, will almost certainly score the highest, since it had the best students walking in the door. It could lock its top students in a closet for four years and they'd score well. As a measure of the teaching performance of the university, it would be completely meaningless.
If a cc, with its open-door admissions policy, wanted to score well, it would have two options. Either achieve absolute miracles in the classroom – presumably, if they knew how to do that, they would have done it by now – or become attrition machines. Weed out students mercilessly, so that only the tippity-top students make it far enough to take the exams at all. (It's a variation on the Texas high school model of suspending the bad kids on test day, to look good on statewide exams.) This strategy would help with the scores, but at the expense of the fundamental mission of the college.
As with NCLB, there's a question of what's actually being measured. If you want to show the caliber of graduates, you don't usually have to do much more than look at the caliber of applicants. If you want to show the caliber of teaching, you need 'before' and 'after' scores to compare. Reward improvement, rather than absolute level. One could reasonably object that improving from 'semi-literate' to 'average' still doesn't scream 'college graduate,' but it certainly screams 'good teaching.' This would get around the issue of selection bias in admissions, though the elite schools might well object that if a student was already scoring in the 99th percentile, exactly how much higher can she go?
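The raw-score-versus-improvement distinction can be sketched numerically. A minimal illustration, with invented school names and made-up numbers purely for the sake of the example:

```python
# Hypothetical illustration: two schools, ranked by raw exit scores
# versus by gains. All names and numbers here are invented.

entering = {"Flagship U": 85, "Open-Door CC": 40}   # avg. score at admission
exiting  = {"Flagship U": 92, "Open-Door CC": 65}   # avg. score at graduation

# Ranking by raw exit score rewards selective admissions:
by_raw = max(exiting, key=exiting.get)

# Ranking by improvement (a crude "value-added" measure) rewards teaching:
gains = {school: exiting[school] - entering[school] for school in exiting}
by_gain = max(gains, key=gains.get)

print(by_raw)   # the selective school wins on raw scores
print(by_gain)  # the open-admissions school wins on gains
```

The flagship "wins" the raw comparison simply by admitting stronger students, while the open-door school shows the larger gain, which is the point: absolute level measures the applicant pool, improvement measures the teaching.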
Why would we expect students to take these tests seriously? What's in it for them? If they don't take it seriously, the scores will be utterly meaningless. Expect kids to either blow off the exams completely, or to show up hung over and/or otherwise chemically altered.
Some fields have considerable consensus over the proper content of an undergraduate curriculum; some don't. This proposal assumes that all do. What would a statewide exam for graphic design majors look like? Marketing majors? Communications majors? Dance majors? Besides, don't we allow individual colleges to determine their own curricula? The Supreme Court said we do.
Even if you get around every issue I've listed so far, and you actually could come up with an accurate ranking of which colleges are doing the best and worst jobs of teaching, it's still not at all clear what to do with that information. Suppose, for example, an enterprising bureaucrat crunched some numbers and found a correlation between high percentages of adjunct faculty and low scores. Would the state of Texas suddenly pony up millions to expand the full-time faculty ranks? (Hint: no.) More fundamentally, wouldn't we expect low-scoring colleges to blame their low scores – rightly, wrongly, or (probably) both – on being underfunded in the first place? Rewarding selective colleges – those with the wealthiest applicants, basically – with even more funding, while starving the colleges with the most first-generation students, would merely amplify existing biases. Going the other way – targeting resources where they're most needed – would create perverse incentives. Either way, something is wrong.
Fraud, fraud, fraud. Every time there's a high-stakes single test, there's massive and widespread fraud.
Transfers. If a four-year college has a high proportion of cc grads transferring in, whose teaching is actually being measured? The same applies to any school with high percentages of transfer students.
Private colleges. Would they be exempt? If so, I'd expect to see some very weird consortial arrangements pop up as various public colleges try to game the system. If not, I'd expect to see some really wild constitutional issues. (A statewide standardized exam for theology majors? Yikes! Question 1: What is the one true faith?)
These are just the issues I could think of off the top of my head. I'm sure there are plenty more – racial bias, ESL (a HUGE issue in Texas), the usual critiques of standardized tests, interdisciplinary programs, and the fundamental fact that American higher education is the envy of the world while our K-12 system is widely perceived as a joke, raising the question of who should be imitating whom, etc. -- and many I've never even thought of. (I'll leave it to my wise and worldly readers to pile on stuff I haven't covered here.)
I hope the Texas legislature has the brains to think of at least a few of these, and to put this idea out to pasture. It has “disaster” written all over it.
Florida has had a test of "college level" skills for decades that must be passed by any student who wants to matriculate past the sophomore year. [Hence it was originally taken by everyone getting an AA and everyone at a Uni, including athletes.] The reason for it was that a lawyer in the legislature got fed up with hiring new uni grads for his office who could not write a sentence. This test enforces a skill level that I would put at about the 11th grade level in both math and English across all majors.
This requirement has been watered down over the years so that students can exempt the exam by getting a 2.5 in designated courses in those areas (e.g. EN101 and EN102). The latest change is to drop the requirement that ed majors pass the written exam even if they make the exemption score, because the requirement was hurting retention among teaching majors across the state.
AFAIK, it was never a factor in funding, and (as DD anticipates) it could be gamed by going to a private college ... just as HS students in Florida transfer to a private HS if they find they cannot graduate from a public school due to their low FCAT scores.
Measuring the presence of critical thinking is certainly possible. If you can't quantify it, how do you assign grades? (answer: "I use a rubric to group student work into A's B's etc.")
At my institution, our new program review protocol requires us to do just such an assessment. The "tests" we're using to assess student growth are home-grown and part of our existing program. Our department is trying to take the perspective that this is a chance to learn about our students and perhaps figure out how to improve our teaching.
The state has no way to enforce exit exams on the private schools. No state money, no state accreditation = no leverage. The state doesn't support community colleges here, either. They are strictly county. The only two-year state schools are senior colleges that offer the last two years as a counterpart to junior (community) colleges that offer the first two years.
As I was reading through your post all I was thinking was "test fraud, just like TAKS in K-12" so I'm glad you hit that point. In addition to scores of kids being classified as special ed or ESL right before test day, Texas also has a large number of low-income high schools with, gasp, NO dropouts. At all! Isn't that amazing? Yeah, we got the fraud down.
I expect this to be another bad idea that dies on the vine due to, if nothing else, lack of funding.
-I'm accounting as fast as I can
Dear Learn'd Astronomer,
Sciences, meet Humanities. By our very nature, we traffic in the unknowable. Do we pass out grades? Sure. Should there be standards? Without question. The question is: who determines the rubric?
Every semester I encounter students who are passive, apathetic, blind to a world beyond cable TV, and downright delusional about the vagaries of the housing market, the limitations of a college degree, the extent to which Madison Avenue has made them indentured servants. It's safe to say students crowding my side of the campus differ from those enrolled in your physics course. Where is their sense of wonder? Curiosity? Intellectual rebellion that goes beyond mere posturing? Ask the HS teacher forced to teach to a test for the last three years.
The HS teacher should not bear all the blame, and these students will/may certainly evolve in four years. But what happens when profs are forced to account for the arbitrary precepts of a politician? Academic freedom is not just for Trotskyites. All profs need the freedom to adapt to the protean challenges of the modern college classroom. I say, butt out.
I submit that the governor of Florida has little desire to make the sunshine state a bastion of Philosopher Kings. He is a politician pandering to the lowest common denominator and using numbers (which are ultimately symbolic and orphic) to make his case.
Who will design this rubric for an entire state? A politician? A bureaucrat? English Profs? (Yikes!) What shall be deemed worthwhile: facts or modes of inquiry? Do we take into account that some darlings of critical theory may be mocked tomorrow? How do we account for improvement over four years? As DD states, where were they intellectually as freshmen? Can you factor that into your equation?
Doctors. Nurses. Attorneys. Engineers. By all means, test them. Credentials are quite relevant. Politicians? That test comes every few years. I have my doubts about their pass rate on this one.
Additionally, we have an argument put forward that if we actually hold schools with open/generous admissions to a standard, then they either will be expected to perform "miracles" or else become "attrition machines."
I take the second assertion as an admission, in part, of defeat. If students from such august institutions will be (en masse) unable to pass such exams without "miracles", should we conclude that these schools are simply "giving away" degrees?
By what measures should we determine these schools are "successful?" (or should we simply take the administration's word for it?)
It's fine to have a debate over whether we should have CCs offering education to people who probably aren't going to be able to transfer into a four year degree, but that is an orthogonal debate.
I never claimed that college is about ineffable critical thinking. (That's pm's view, not mine.) I claimed that colleges have lots of different programs/majors, and that different programs/majors develop different competencies. Which means you'd either have to come up with statewide exams for each of the hundreds of different majors out there, or go to lowest-common-denominator skills. The former strikes me as an albatross, and the latter as defeating the purpose.
Am I admitting defeat in my line about attrition machines? I don't think so -- I'm saying that "the best of the best" will beat "the median of the entire population." The only way around that would be for cc's to limit testing to the best of the best, which, again, would defeat the purpose.
pseudonymous' last question, though, is both trenchant and difficult. How do we measure success, if not by standardized tests? It's a toughie, and one that I would say hasn't been satisfactorily answered from Harvard on down. Job placement rates tell you as much about market fluctuations as they do about teaching or learning. Grad school/Med school admissions rates are based, at least in part, on reputation.
I fully agree that disciplines with Board exams lend themselves to easy measurement, and my cc is justly proud of its pass rates in those areas. But those exist only in a few areas. In disciplines in which consensus is lacking, I don't see that happening.
I agree when you write "Comparing the two sets of institutions and rewarding the second set punishes the first set for trying to offer education to a broader crowd."
Are we aware of how the testing would actually be implemented, or have certain assumptions been made in this rather quick-and-dirty analysis? Assuming that DD is correct, and that the plan is to compare all schools without regard to mission, it seems to me that we have an opportunity here.
Perhaps we should take the opportunity to influence the policy makers and improve the process, and help them understand how they can use this structure to make apple-apple and orange-orange comparisons, rather than assume they will only draw the apple/orange comparison.
Okay, enough fruit analogies. I see here two major issues. The first is providing a way of assessing if the various state-funded institutions are meeting the mandates given to those institutions. If we use the exam results as a means of comparing like-chartered schools, then we shouldn't have a problem--right? Surely there is no problem with using appropriately developed measures to assess if the Univ/Colleges are meeting the missions for which they receive funds!
The second problem is perhaps orthogonal, but since DD hinted at it here, it is fair game. Are CCs actually achieving their mission of educating their students? And if they are, how can cross-CC comparisons be made to allow for the rational allocation of state resources (since the "state" here, meaning the governmental agency, provides the funding)? I only target CCs specifically since DD, a proponent of CCs, seems to believe that miracles must take place for his students/graduates to "do well" on such an exam.
The Texas plan makes sense from an outside perspective and looks like madness from the inside. To the greater public, holding accountable the institutions that take in huge amounts of tax money and are supposed to educate the youth is a no-brainer. To the schools themselves, it's nuts for the reasons DD gives. ("We're going to punish you for reaching out to the community and trying to educate non-elite students! Silly school, you should stop that!")
Ugh. What a mess.
Actually, I don't think these two are so contradictory--as you point out, if you were to define success in terms of apples to apples.
What makes a CC successful? What makes a 4 yr LA successful? Flagship U? I am actually quite curious about the answer to this set of questions...
I would think in general they all must have as a base some 'educational' component. Students who enter should be transformed in some way, upon departure. Yes, when you cannot control for the quality of the product entering the system, then there must be some acknowledgment that there will be an impact on the output.
As for the comment "I submit that the governor of Florida has little desire to make the sunshine state a bastion of Philosopher Kings." directed at someone else, let me assure you that the test I referred to predates our ex-Gov Bush. And it would be unfair to the Democratic State Senator who started it, and the multitude of Governors and others who supported it, to fail to acknowledge that our grads today read and write and compute better than they did in 1980. A test that had consequences for the student but not for the institution had positive effects. There is plenty of improvement still needed, as well. My main point was to comment on how Reality eventually diluted its role, as DD anticipated.
I've run on too long already, but there is a difference between a test that is diagnostic and leads to remediation in cases where a passing grade does not equate to long-term learning of a key skill, and a test that is used for political (usually punitive) purposes.
Case in point: the TASP and the laws requiring students to remain in constant remediation until they passed all three sections of this assessment of basic skills. They finally did away with that one 'round about 2003 (well, replaced it with something very like it, actually), but up to that point, developmental education students were caught in a vicious cycle: testing, retesting, and testing again; taking their entire sequence of developmental education courses and still failing the test; then taking a non-course-based developmental education program and still failing the test; then taking that non-course-based program over and over and over again just to mark time, staying eligible for financial aid and enrollment while preparing to take the test yet again and fail it yet again.
While students were waiting in academic limbo to pass all three sections of the test (with math usually being the stumper), students inevitably figured out the "catch." Private colleges and universities in Texas didn't have to abide by this rule. Their students didn't have to take and/or pass the TASP, so community college students would accumulate enough hours to transfer, apply to the private school of their choice, and off they would go, having never achieved college-level proficiency in reading, writing, or (most often) mathematics.
In the end, what were we measuring? Was the TASP truly a measure of college readiness? No. Students could pass it, enter into college-level courses, and still be entirely unprepared for success in those classes. It was an imperfect instrument. Yet that instrument cost the state tens of millions of dollars each year.
Again, just because it was a bad idea didn't stop the state from putting it in place. I should note that when Bush Jr. ran for prez the first time around, he ran his campaign (in part) on the "miracle" he had performed on the educational system in Texas. Oh, it was a miracle alright. It's a miracle it's still functional at all.
I submit that it should be the obligation of the DD's of this world to propose the right measures by which outsiders can gauge the success of their students. "Trust us..." is not a viable response.
A private college can give out useless degrees because the customers pay and take their chances. But CCs and state schools rely upon the generosity of strangers' taxes. They not only should show that students learn something, but they should also have to justify the existence of entire departments.
Don't like it? Don't ask for state money.
This isn't about not supporting education. It's about not letting educators go without accountability. After all, professors are also evaluated for tenure and promotion on a variety of objective and subjective measures which are highly imperfect.
I submit that the unwillingness of many profs to submit to outside evaluation, coupled with rampant grade inflation in some subjects, guarantees still less support for higher education in the future and an even more vicious corporate mentality in our institutions of higher learning.
Dear DD ---- Don't assume that the legislature will be able to figure this out on their own; you have written a great, persuasive explanation of the headaches they will have to deal with should they pass it. Please mail it off to some of the more sensible of the members or get someone in their neighborhood to write in a version of this as a letter to the editor. By blogging you already are behaving as a public intellectual; I urge you to take that another step further. Even if you are not a Texas resident, you will be doing educators all over the nation a service if you help to squash a terrible idea.
Best homage to Molly Ivins, ever. Thanks!