Sunday, May 05, 2013


Measuring Success

How do you know when a college is doing a good job?

The traditional answer was either “by reputation,” or “because the faculty tells you so.”  But those are both flawed.  The former is based on a host of factors having little or nothing to do with teaching and learning -- the fate of the basketball team, say -- and the latter is beset by a basic conflict of interest.  “We’re experts -- just ask us!” doesn’t convince most laypeople, nor should it.  

But if we don’t use those standards, what should we use?

U.S. News uses “inputs,” such as money spent per student, or volumes held in a library.  But spending money and getting results are two different things.  In fact, there’s a valid argument to be made that getting better results with less money reflects good management, even though it would get punished in the U.S. News ratings.

We could use the results of “outcomes assessment.”  That has been the goal/fear of many in higher education for over a decade.  But outcomes assessment doesn’t lend itself to such broad-stroke judgments.  If it’s done with an eye towards generating useful results for actual improvement, then it tends to be locally defined, and therefore resistant to cross-institutional comparisons.  If it’s done with a blunt instrument, like a standardized test, then it will tend to miss most of what colleges actually do.  If the test is low-stakes, then students may not buy into it.  If it’s high-stakes, then it will generate the same “teaching to the test” and widespread cheating that NCLB generated at the high school level.  And although we don’t have an elegant language for talking about it, colleges with high entrance standards will always, inevitably, score higher on standardized measures than will colleges with open doors.  That’s by design, and it does not reflect on the quality of work done by either college.  Give me the same entering class as Swarthmore, and I’ll show you some damn good test scores.  At a certain level, we’re still measuring inputs.

Alternately, we could go with graduation rates.  But that, too, is flawed at best.  Graduation rates have a lot to do with student goals, for example.  Any community college administrator can rattle off some basic flaws of the IPEDS database: it only counts “first-time, full-time, degree-seeking” students, who constitute a minority of community college enrollees; it ignores student intention, so a student who only intends to spend a year at a cc before transferring counts as a dropout; and it stops counting quickly, so students who switch to part-time status are counted as failures, even if they graduate.  I’m not saying this out of sour grapes: Holyoke has one of the highest graduation rates among cc’s in the state, despite being located in the state’s lowest-income city.  We punch well above our weight.  But a flawed measure is a flawed measure.

We could go with starting salaries and/or job placement rates: since the Great Recession started, that has been the political favorite.  But that tells you much more about the local economy than it does about the college.  New graduates will have a much easier time finding work in New York City than in Buffalo, regardless of how well their college taught them.  Community colleges with strong “transfer” identities will suffer in the comparison, even though their graduates are actually setting themselves up for long-term success.  (A college junior on a pre-med track isn’t making much yet.)  And the programmatic mix at a college will have much more impact on starting salaries than will gradations of quality within each program.  Nursing grads will start at higher salaries than will journalism grads, even if the journalism program is really good.  

At an even more basic level, asking how well a college is doing presupposes knowing what it’s supposed to be doing.  Research universities have different missions than do community colleges: the former is supposed to do cutting-edge research, and the latter is supposed to help everybody.  The former is based on meritocracy, and the latter on democracy.  Yes, colleges have mission statements, but they tend to be broad and vague.  In the absence of a relatively robust definition of a given college’s mission and place in the academic universe, it’s far too easy to get the measure wrong.  As Einstein supposedly put it, you don’t judge a fish by its ability to climb a tree.

As “performance funding” measures catch on, the stakes of this topic are moving from reputational to financial.  This stuff matters.

So wise and worldly readers, I turn to you.  How do you know when a college is doing a good job?

How about asking former students five years after they left the college (either with a degree or not) whether they feel they got their money's worth?
First, standardized tests can measure value added if given at the start of the year as well as at the end of the year. That sort of longitudinal approach might not put the schools with high inputs in such a favorable light.
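For what it's worth, here is a minimal sketch of that pre/post comparison, using invented scores for two hypothetical colleges (none of the numbers or names are real data; they're just there to show how the longitudinal view can flip the usual ranking):

```python
# A minimal sketch of the pre/post "value added" idea, using invented scores
# for two hypothetical colleges.  The point: the gain from the start-of-year
# test to the end-of-year test, not the raw end-of-year score, is what
# reflects the college's contribution.
from statistics import mean

# (pre_test, post_test) pairs for a small sample of students at each school
selective_u = [(85, 90), (88, 92), (90, 93), (82, 88)]
open_door_cc = [(55, 72), (60, 74), (48, 66), (62, 78)]

def value_added(scores):
    """Average gain from the fall test to the spring test."""
    return mean(post - pre for pre, post in scores)

print("Raw spring average, selective U: ", mean(p for _, p in selective_u))
print("Raw spring average, open-door CC:", mean(p for _, p in open_door_cc))
print("Value added, selective U:        ", value_added(selective_u))
print("Value added, open-door CC:       ", value_added(open_door_cc))
# The selective school wins on raw scores, but the open-door school shows
# the larger gain -- the longitudinal comparison tells a different story.
```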

As usual, your speculations lead to some insight. I think the emphasis should be on "what they are supposed to be doing" for each important population on campus. It does have to be robust and measurable. That sort of thing is what the discussion leading to Outcomes is supposed to focus on, as I understand the ideal case.

For undergrads (and I don't give a research U a free pass here), it should be whether they are ready for the next class at their own college, for the classes at a transfer institution (for a CC or U), and how they perform in a job or graduate program after the AS, BA, or BS. A research U would have additional measures for graduate students after they finish. Asking the students, as noted in the comment above, is also a good idea.
CCPhysicist: Value-added testing has to be carefully done.

Falsification is important to consider.
Without revealing too many details, my institution is currently evaluating all of its programs using a common metric; this approach has been used in the US as well, and seems to work fairly well. If a program like that can meaningfully compare athletics to parking to zoology, then why not do it with community colleges? Come up with different questions that would target different types of colleges. For instance:

- What are your institution's research outcomes in the past 10 years?
- What are your institution's teaching outcomes in the past 10 years?
- What are your institution's workplace integration initiatives?

Any given institution will likely do well on only 1 or 2 of those 3 questions, so that allows for a teaching-heavy community college to be compared to a co-op-heavy training school. It would bypass "teaching to the test"; instead, colleges could use survey and graduation numbers as they see fit. Granted, some standardization would need to occur as to what counts as graduation, but you could invent that definition as you go.
Wouldn't it just be awesomely convenient if there were one simple number that would tell you everything you need to know? Sheesh. Talk about a multidimensional problem that deserves multiple data streams and a TAD more thought than most people are willing to give it!
Look at enrollment figures over an arbitrary number of years...say 10 years. Run a statistical test to see whether the local economy is significantly correlated with enrollment; that will tell you whether students continue to enroll each semester because the college is doing a good job or because there are few jobs in the local economy.

Statistical analysis can give you that answer.
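Here is a rough sketch of that correlation check, using ten years of invented enrollment and local unemployment figures (the numbers are made up purely for illustration, and the `statistics.correlation` function requires Python 3.10 or later):

```python
# Hypothetical ten-year figures for one college and its local unemployment
# rate.  A strong positive correlation would suggest that enrollment is
# tracking the job market rather than the college's own performance.
from statistics import correlation  # available in Python 3.10+

enrollment   = [6100, 6150, 6200, 6180, 6300, 6900, 7400, 7600, 7500, 7300]
unemployment = [5.1, 5.0, 4.9, 4.8, 5.0, 7.2, 9.3, 9.6, 9.0, 8.2]  # percent

r = correlation(unemployment, enrollment)
print(f"Correlation between local unemployment and enrollment: r = {r:.2f}")
# In this made-up example, enrollment rises and falls with unemployment, so
# the enrollment growth says more about the recession than about the college.
```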

It seems to me that students who have more than one option for where to attend the first two years of college will choose colleges with excellent reputations that fit their budgets. Therefore, a college is doing a good job if a base enrollment number continues at or above the same level each semester regardless of the local economy.

No tests needed, just the opinions of the community demonstrated by continued or increased enrollment.

I think sometimes legislators get sold on "making sure our tax dollars are getting a good return" and this attitude drives testing, which may or may not give the answer.

SES is statistically correlated with achievement on standardized tests, according to all the research I have seen, so such tests are not a good measure. Low SES = lower test scores; high SES = higher test scores.

Go with the number of consumers (students). Companies use how many consumers buy their goods as an indicator that they are doing a good job. That is real world. Maybe colleges can use the same measure.
This has turned out to be a pretty interesting discussion. Let me throw in another point: like any other aspect of post-secondary ed, any result is likely to be context dependent.

In another sense, once a school starts to publicly discuss measuring success it may signal that the school is in trouble, to some extent.

Think of it this way: suppose both Stanford and Backwoods CC implement the same measurement program to evaluate their programs. Suppose both do badly according to these measurements. Backwoods CC may face serious consequences like cuts to funding or lessened resources, while Stanford probably won't (you may also substitute a public institution like U of T at Austin if you'd rather compare public to public).

When conservatives hate it.

You have a group of people who are opposed to math, science, empowerment, and prosperity. If they hate a given college, it is, by definition, doing well. Consider it reality-based crowdsourcing.

Thanks, PonderingFool @5:25AM. There is no tracking in college (apart from Honors classes) but there are still uncontrolled variables involving how a student came to be in my classroom.

GradStudent @6:07AM appears to accept the idea that a large research university with 8,000 freshmen is not a "teaching heavy" institution.