Tuesday, March 20, 2007

 

Just Can't Work Up Much Outrage on This One

According to IHE, Arizona State University has written several performance incentives into the contract for its current President, including a substantial monetary reward for improving the university's ranking in U.S. News. The usual suspects are shocked and appalled.

The quote that rang a bell for me (quoting Raymond Cotton, a lawyer who frequently negotiates presidential contracts): “it is inappropriate 'for a board of trustees to turn their own priority setting authority over to a third party'...”

Hmm.

Working in a cc, I'll admit a certain immunity to U.S. News rankings. They don't even notice us, so we don't really concern ourselves with them. To the extent I notice them at all, it's just to make sure that my alma mater continues to kick its rival's sorry butt. Which it does.

And I still don't quite understand who died and made Mort Zuckerman the arbiter of educational excellence. I'm not disputing his First Amendment right to publish the rankings; I'm just wondering why people take them seriously. I could imagine any number of alternative ranking criteria, and not just those based on some ideological agenda (most Christian, most conservative-friendly, etc.).

All of that said, though, haven't most 'competitive' colleges and universities already effectively outsourced their tenure decisions to academic publishers, most of which make publication decisions based on (perceived) marketability?

Again, at the cc level, we're largely immune to that. We base tenure decisions primarily on teaching. Some indication of an attempt to keep active in the scholarly field is always helpful, but nobody has ever been fired for not publishing, as far as I know. So I can ask the question without really having a dog in the fight.

It seems to me that decisions about the marketability of a book are largely independent of the book's scholarly merits. (Either that, or our leading political scientists are Ann Coulter and Al Franken. Shoot me.) To the extent that's true, then basing tenure decisions on having a book or two out there is a reflection of salability, rather than quality. To that extent, ASU's move is simply the logical conclusion of a process already long-established.

For that matter, aren't the big athletic conference rankings based on polls, rather than simple win-loss records? To that extent, haven't we outsourced measures of success to journalists?

Again, I'm not defending U.S. News per se; I'm just not sure that this is the radical break it resembles at first blush.

Honestly, I don't see an argument for ignoring external measures. To a great degree, I think, enrollments function as a sort of external measure; if a college takes a seriously wrong turn, students will say so with their feet. Public colleges and universities absolutely need to respond to governmental mandates, even when those mandates are hamhanded, malicious, or simply stupid. So now we get indignant over a magazine?

Suggestion: those who really object to the U.S. News rankings should stop trying to replace something with nothing. Instead, come up with a more valid system of rankings, and publicize the hell out of it.

For example, it's not obvious to me that there's a one-to-one correlation between, say, size of endowment and quality of education. Nor is it clear to me that it makes any sense at all to punish colleges with substantial numbers of adult students, which is what happens when 'time to degree' is a criterion. (At the very least, it should be possible to control for that variable.) And the old (possibly apocryphal) anecdote about the high ranking of the Princeton Law School (it doesn't have one) speaks to the power of the 'halo effect' of an institution's overall profile.

One of the great benefits of the blog boom has been the sudden easy-and-cheap availability of soapboxes, for those so inclined. Is U.S. News badly flawed? Okay. Do your own. The alternative to flawed measurements isn't no measurements; it's better ones.


Comments:
The Washington Monthly has published an alternative to the US News rankings for a few years. Their 2006 rankings can be found at http://www.washingtonmonthly.com/features/2006/0609.collegechart.html. As I recall, the basic idea of the Washington Monthly rankings was to look at three broad features of colleges. First, how good a job does the college do at fostering social mobility? Second, does it "foster scientific and humanistic research"? And third, does the college promote an "ethic of service to our country"? The Washington Monthly editors came up with a number of ways of measuring these different qualities. For example, I think rates of alum participation in the Peace Corps, AmeriCorps, Teach for America, the military, etc., are used to measure the "ethic of service" criterion.

There's certainly much to critique about the Washington Monthly rankings. I don't know if they're necessarily better than the US News rankings. But they're different, and probably no less arbitrary than US News. For better and for worse, various external measures are here to stay. It seems to me that diversifying these measures is probably a good thing.
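For what it's worth, the mechanical part of any of these rankings is trivial; the hard, contestable part is choosing the measures and the weights. A toy sketch of combining normalized measures into a composite score (schools, scores, and weights all invented, in Python purely for illustration):

```python
# Toy illustration: a composite ranking built from several normalized
# measures. Schools, scores, and weights are all invented.
measures = {
    "Alpha State":  {"social_mobility": 0.72, "research": 0.40, "service": 0.55},
    "Beta College": {"social_mobility": 0.58, "research": 0.75, "service": 0.30},
    "Gamma Tech":   {"social_mobility": 0.45, "research": 0.90, "service": 0.20},
}
weights = {"social_mobility": 1 / 3, "research": 1 / 3, "service": 1 / 3}

def composite(scores):
    # Weighted sum of the chosen measures; the weights carry all the judgment.
    return sum(weights[k] * scores[k] for k in weights)

ranked = sorted(measures.items(), key=lambda kv: composite(kv[1]), reverse=True)
for rank, (school, scores) in enumerate(ranked, start=1):
    print(f"{rank}. {school}: {composite(scores):.2f}")
```

Change the weights and the order can flip, which is really the whole argument about rankings in miniature.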
 
Schools in the South, through the accrediting agency Southern Association of Colleges and Schools (SACS), have developed a system to combat this phenomenon: Outcomes! US News tends to focus on inputs, which we know have little to do with outputs, i.e. student learning. Our focus is on student learning and on proving that students are better for spending their time with us. It's a fun time to be in education, though one filled with rethinking the way we do business.

Besides, this is our attempt to foil the Spellings Report and get away from a nationalized exit exam for all post-secondary institutions. If you haven't read Spellings, please do. It is frightening in that it tries to box all post-secondaries (CCs to Ivies) into one neat little box!
 
The biggest problem with the US News rankings is that they measure "quality" by "inputs" rather than "value added." I'd be much more interested in seeing a ranking of schools based on the degree to which their seniors over-perform what would be predicted based on their characteristics as high-school graduates and by the school's resources. That would tell us how good the *school* is, as opposed to how good the students and resources it's working with are.
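Just to make the idea concrete, here's a rough sketch of how a "value added" figure like that might be computed: predict each school's outcome from its inputs, then treat the gap between actual and predicted as the school's contribution. Every number and column name below is invented for illustration:

```python
# Hypothetical sketch: "value added" as the gap between a school's actual
# outcome and the outcome predicted from entering-student characteristics
# and institutional resources. All data and column names are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per school: [avg HS GPA, avg SAT, spending per student ($k)]
inputs = np.array([
    [3.2, 1050, 18],
    [3.8, 1350, 55],
    [3.5, 1200, 30],
    [3.0,  980, 15],
])
observed_outcome = np.array([62, 91, 80, 58])  # e.g. six-year grad rate (%)

# Predict the outcome each school "should" get from its inputs alone.
model = LinearRegression().fit(inputs, observed_outcome)
predicted = model.predict(inputs)

# Value added = how much each school over- or under-performs its prediction.
value_added = observed_outcome - predicted
for school, va in enumerate(value_added):
    print(f"School {school}: value added = {va:+.1f} points")
```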
 
"I'd be much more interested in seeing a ranking of schools based on the degree to which their seniors over-perform what would be predicted based on their characteristics as high-school graduates and by the school's resources."

Is the change in growth of contributions to an endowment a fair proxy for this?
 
Keyser,

Is your assumption that over-performing students will "pay back" the institution by contributing more as alums? If so, I'd rather measure that by the percentage of alums contributing rather than the dollar amount, since economic success is not the main measure of "good outcomes" I would use.
 
I wasn't aware of Washington Monthly's 2006 rankings, so I took a look at the website that dave provided (thanks dave!). Perhaps I should have taken the sentence "I don't know if they're necessarily better than the US News rankings" more seriously. Sigh... It's true. What really made me lose all respect for these rankings was the following:

"45. Washington University in St. Louis (WA) - 120"
Yeah, last time I checked, St. Louis was in Missouri. And it was ranked 12th last year (by US News), not 120th.
I understand that we all make mistakes, and I understand that it's probably not fair to discount an entire list because of one mistake. And it's my choice to do it anyway.

Seriously though, thanks for the additional information. I always find this topic very interesting, and also quite confusing.
 
I'd just observe that in the sciences, tenure at the R1 schools is much more a function of peer-reviewed journal articles and reputation in the field than of published books. And since you pay to publish your research (publication fees on my last two papers were in the $1k - $2k range -- thank goodness scrounging that up out of grants is my advisors' responsibility), acceptance into journals is much more a function of your peer reviews than of the marketability of the research (with the possible exception of the very high-profile, cross-field journals like Science and Nature).
 
Actually, the athletic conference rankings are about the only rational thing in Division 1-A sports--everything from the BCS bowl insanity to the NCAA tournament selections is based on darned near a million things other than won-loss records. Which, on the one hand, does keep schools from playing only the dozens of bottom-feeder schools that are in Div. 1-A just to collect big paychecks for getting stomped by big-name opponents, and then claiming that their 13-0 record means they should be playing for a national title. But it also means that there's no way to come anywhere close to determining who's the best at football.
But when you look at the academic situation, what do you expect?
 