Monday, August 21, 2006

 

A Quart of Liberty

At my previous college, I was one of the very few liberal-arts types to make it into administration. That, and my selective pickiness about grammar, made me the go-to guy for administrators when there was a grammatical or syntactical issue. I remember the then-registrar calling me from out of the blue to ask what an “abstract noun” was. I gave the example of liberty. It exists, certainly, but it's not like you can measure a quart of liberty.

The recent move to impose national standardized tests on higher ed strikes me as trying to package liberty in quarts.

An alert reader sent me a link to this article from the Washington Monthly about measuring student learning at various colleges. Taking a consumerist view as given, the article blames complacent pointy-headed intellectuals for trying to hide their rent-seeking behavior behind an ink cloud of evasions. If we were to subject colleges to rigorous measures of student learning, the article claims, soon the last would be first, the smug ivy leaguers put in their place, costs cut, and enlightenment achieved across the land.

Well, no.

A few top-of-my-head objections:

- If you're measuring outcomes at graduation, you're largely measuring selectivity of admissions. Is it more admirable to take a kid from 'remedial' to 'educated,' or to take a kid who's already brilliant and simply not mess her up? If you only look at outputs, you're largely measuring inputs. Lo and behold, colleges that admit lots of smart rich kids from tony prep schools will do the best job. You heard it here first.

- Different majors build different skills. A chemical engineering major will probably develop a different skill set than a drama major. (If not, at least one of those programs is doing something wrong.) Getting around the wild diversity of majors would require focusing on 'lowest common denominator' skills, much as our K-12 system does now. This strikes me as backwards. Our higher ed system is the envy of the world. Our K-12 system is, at best, limping. If anything, the emulation should go the other way 'round.

- Is there really a consensus on what students should learn in college? Did we have that conversation when I was in the bathroom? I don't remember that. Should we teach students how to make money, how to be critical citizens, or how to find themselves? Should we measure success by starting salaries, philosophical sophistication, sensitivity to diversity, public speaking skills, or math tests? Before we figure out how to measure, shouldn't we first figure out what to measure?

- (A quick aside on starting salaries: these fluctuate far more than the quality of education delivered. For example, IT grads in 1999 could write their own tickets. In 2003, they couldn't get arrested. Neither has much to do with the quality of instruction in college IT programs.)

- Doesn't the GRE already exist? I mean, if we really just want to look at lowest-common-denominator skills, isn't the instrument already out there? Just set a few scores across the categories as graduation requirements, and presto! No Undergraduate Left Behind.

- Since when did we all agree that educating undergraduates is the sole purpose of American higher education? I didn't get that memo. Research universities perform important roles in pursuing breakthroughs in many key fields of human endeavor. Community colleges help displaced workers retrain for other careers, sometimes eschewing academic degrees for vocational certificates. As different as they are, both of these functions serve the public. Are we suddenly to just discount or ignore these functions? If so, why?

- Since when did we all agree that students all want the same thing? Simply put, they don't. Some want to learn everything they can in a given field. Some want to learn enough to get a job that pays well, but no more than that. Some see college as primarily a social experience with a secondary credentialing function; classes are, at best, an ancillary nuisance. The best party school may not be the best teaching school, which may not be the best research school or football school or whatever else.

- How would religiously-affiliated colleges fit with this? If a college sets 'leading a spiritual life' as one of its primary missions, how do we measure that? (“Students at BYU are 15% more spiritual than students at LSU.”) Given the prominence of religiously-affiliated colleges and universities in America, and the diversity of expressions of faith, this is not a trivial concern.

Besides, you'd have to be living in a cave not to discern that the real agenda behind this movement is cost-cutting. It's punitive, and would be executed accordingly.

None of that is to deny some of the central charges animating this movement. Yes, tenure protects some egregiously ridiculous people. Yes, large lecture halls are crappy learning environments. Yes, tuition at some colleges is going up much faster than family income. (To the people most concerned about that, I say, HELLO! CC'S HERE! HAALLLOOO!!!!) Yes, at many colleges, people are hired to teach, but fired for not doing research. Yes, much of the research that is produced is absurd, or annoyingly esoteric, or even just wrong. Yes, the adjunct trend is offensive on a great many levels. Yes, the internal politics of colleges and universities often stymie productive reforms. (Astute readers of this blog may have noticed me spouting off on that every now and then.) Yes, college reputations are often hard to trace to anything resembling a 'cause.' All of that, granted.

But to respond with a call for a mandatory systemwide lobotomy just doesn't help, except possibly as a bogeyman. A single blunt instrument is inappropriate for such a diverse field. It's a pseudo-populist gesture designed to elicit knee-jerk affirmation from people who know a little but feel a lot.

I'd much rather engage in the (considerably harder) work of re-engineering our ways of doing business to achieve stronger outcomes appropriate to each kind of institution. That may well involve looking closely at what we reward (and whether anybody should be made bulletproof), at how we recruit, at what and how we teach. Regular readers know my impatience with the status quo on many of these. The way to do that, it seems to me, is first to accept that different colleges have different missions. Until we can sell the public on that, we'll be stuck playing defense against one-size-fits-all solutions like this. It's easier to measure a quart of snake oil than a quart of education.

Comments:
(tongue firmly planted in cheek) - Since national standardized testing works so well at the K-12 level, it seems a natural extension to emulate that in higher ed.
 
The same people who insisted on NCLB are insisting on standard assessment of higher ed. Clearly those people don't understand either.

As a (former) member of my college's assessment committee, I think a national assessment is a nightmare.... we've had a hard enough time getting one rolling on our campus.

That being said, there isn't anything wrong with each college doing some work to prove that it is achieving the educational goals it claims to be achieving. Especially at the Ivies, they could do a pre- and post-test on something like writing to show that their students increased their writing skills in return for their tuition dollars.... that would force the schools to prove that not only are they admitting the best and the brightest, but that those students are actually learning something.
 
One of the best treatments of assessment in higher ed (he says, as he has a two-hour assessment committee meeting this afternoon) is the AAC&U "Our Students' Best Work", which can be found here:

http://www.aacu.org/publications/pdfs/StudentsBestReport.pdf
 
Sounds like the basis for a great LTE!
 
Derek Bok's recent book Our Underachieving Colleges suggests some core areas that 4-year colleges should focus on to measure achievement. (Actually, many of the things he talks about would apply to CCs as well.) Of course, he is not suggesting national testing of any kind, but rather that each university develop its own way of measuring student achievement in order to improve pedagogy in areas like teaching writing, analytical thinking, etc. The AAC&U paper Steven refers to appears, at first glance, to take a similar approach.

Bok makes a lot of good points in the book, which is definitely worth a read. For instance, he points out that many undergraduate course requirements are there "for form" but can't really accomplish their stated goals; e.g., language study is often required, but almost never for enough semesters to get anywhere close to fluency. Another good point he makes is that teaching writing is very important, but resources are rarely devoted to it. Bok feels, correctly I think, that faculty are too set in their ways of teaching, and too quick to dismiss evidence of the effectiveness of alternative approaches with "well, that might have worked in that study, but here things are different because of X, Y, and Z." Bok wants to measure student achievement at the university level to provide a way of validating the alternative pedagogies appropriate for a particular university.
 
"But to respond with a call for a mandatory systemwide lobotomy just doesn't help..."

I'd go further. It's insane.

I've been dealing, directly and indirectly, with the "outcomes assessment" movement since the late 1980s, and, so far as I can tell, we haven't yet reached the beginning: deciding what it is we want/need/plan to assess. Largely for the reasons you provided, of which the most important is the diversity of goals--student goals, institutional goals, societal goals.

This does not mean doing nothing, but it does mean taking seriously the diversity of outcomes--and then taking seriously finding a diversity of ways to address that diversity. Some of those ways may be quantitative (I'm a numbers guy myself), but a lot of it may be qualitative (think about "measuring" how well someone understands poetry).

Makes me glad I am no longer administrating, and that I'll retire in 6 years.
 
I began with a quick response here, but it soon turned into an unwieldy rant, best seen here.
 
For all those people who complain that the cost of college is going up because of administrative bloat, just wait for the six-figure jobs that will be created by an assessment machine at each school...
 
I so agree.
 
Assessment? Simple. Just look at zip codes, and you'll get the same information that the GRE or SAT or any other standardized test will give you.

The only thing that these tests measure with any accuracy is socio-economic background.
 
Okay, anon...

So are you arguing that one's socio-economic status is in no way connected to one's ability to perform and thus move up (or down) in the socio-economic strata?

This is difficult to discuss, and can easily degenerate into so many other areas, but one really should deal with the concept that people with ability (mental, physical, whatever) will be rewarded, and thus move up through the levels of society.

You can show correlation. Causation is something else again. Tests measure one's ability, and the correlation to "zip code" may simply be that.
 