Monday, November 24, 2008

Ask the Administrator: Institutional Research

An occasional correspondent writes:


Let's say you get a piece of paper with a report, pie chart,
etc. on it that presents some pieces of information.  Maybe it's from
the registrar, saying that enrollment in elective courses is up while
that in core courses is down.  Maybe it's from the development office,
saying that we raised 6% more than we did last year.  Maybe it's from
admissions, comparing numbers of applications from the last couple
years.  It might be good, bad, or indifferent news, counterintuitive
or blindingly obvious.  In any case--how do you know it's accurate?
What checks are in place to verify the accuracy of information like
this?  Financial statements are audited every year.  What about the
rest of the mass of data that an institution accumulates?


Most colleges of sufficient size have something like an Office of Institutional Research. (Sometimes the office consists of just one person, but I've seen it consist of a real staff, too.) Sometimes the IR office is located in Academic Affairs, sometimes in Student Affairs, and sometimes in some other corner of the institution. (For whatever reason, I've often seen it coupled with the Foundation.)

The IR office is charged with generating data to populate various reports, both required and discretionary. The Federal government requires all kinds of data reporting, to document the use of financial aid, the direction of graduation rates, different achievement levels by race and gender, etc. The colleges don't have the option of ignoring these, at least if they want their students to be eligible for Federal financial aid. Additionally, it's not unusual for grantors to want periodic updates on issues of concern to them, and the smarter academic administrations will generate plenty of queries of their own, the better to enable data-based decision-making. (As opposed to, I guess, faith-based.)

I've had some strange experiences in dealing with IR offices. As a fan of data-based decisions, I usually get the frequent-customer discount with the IR folk. In the course of earning that discount, though, I've learned anew that data are only as good as the queries behind them.

Take a simple question, like “what's the college's retention rate?” Fall-to-Spring, or Fall-to-Fall? First-time, full-time students (the federally mandated data), or all students? Matriculated students only? What about students who transferred out after a year and are now pursuing four-year degrees? (We have a significant number of those, and they count as 'attrition' for us and 'grads' for the four-year schools. It's a persistent and annoying bit of data bias.) What about students who never intended to stay? Students who withdrew last Fall, stayed away last Spring, and returned this Fall? And at what point in the semester do we count them as attending? (We've usually used the tenth day, though any given moment is obviously imperfect.)
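
Just to make the definitional problem concrete, here's a minimal sketch, in Python, of how two perfectly defensible cohort definitions produce different "retention rates" from the same records. The field names and the toy roster are invented for illustration and don't reflect any actual student system.

```python
# A toy roster; field names and records are hypothetical.
students = [
    {"id": 1, "first_time": True,  "full_time": True,
     "fall_2007": True, "spring_2008": True,  "fall_2008": True},
    {"id": 2, "first_time": True,  "full_time": False,
     "fall_2007": True, "spring_2008": True,  "fall_2008": False},
    {"id": 3, "first_time": False, "full_time": True,
     "fall_2007": True, "spring_2008": False, "fall_2008": True},
]

def retention(cohort, returned_term):
    """Share of the cohort that shows up again in the given term."""
    cohort = list(cohort)
    return sum(1 for s in cohort if s[returned_term]) / len(cohort)

# Federal-style cohort: first-time, full-time students, fall-to-fall.
ftft = [s for s in students
        if s["first_time"] and s["full_time"] and s["fall_2007"]]
print("First-time full-time, fall-to-fall:", retention(ftft, "fall_2008"))

# Broader cohort: everyone enrolled in the fall, fall-to-spring.
everyone = [s for s in students if s["fall_2007"]]
print("All students, fall-to-spring:", retention(everyone, "spring_2008"))
```

Same students, two different "retention rates," and neither query is wrong; they just answer different questions.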

Graduation rates are even tougher, since we don't track students once they've left. Based on feedback from some of the four-year schools around us, we know that a significant number of students who leave us early get degrees from them, but it's hard to get solid data.

Moving from institution-level data to program-level data makes things that much worse. If a student switches majors and later graduates, should that show up as attrition for the first program? Is it a sign of an institutional failure, or is it simply something that students do? Again, defining the variables is half the battle.

In terms of information that doesn't come from the IR office, accuracy can be trickier. That's because the whole purpose of the IR office is to provide data; when other offices do it, they're doing it for a reason. Some data are relatively easy to verify, so I'd tend to believe them: number of admitted students, say, or number of donors to the foundation. Others are tougher. For example, something as seemingly straightforward as “percentage of courses taught by adjuncts” can be calculated in any number of ways. If a full-time professor teaches an extra course as an overload, and gets adjunct pay for it, does that course count as 'full-time' or 'adjunct'? Do non-credit courses count? Do remedial courses count, since they don't carry graduation credit? What about summer courses? Do you count course sections, credit hours, or student seat time? Do you count numbers of adjuncts, or the courses taught by them? (This is not a trivial distinction. Say you have two full-time faculty teaching five courses each, and four adjuncts teaching two courses each. It's true to say that you have a 5:4 ratio of full-time to adjunct courses; it's equally true to say you have a 2:1 ratio of adjunct to full-time faculty. Generally, the statistic chosen will reflect the desired point.)
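
For what it's worth, the arithmetic in that parenthetical is easy to lay out. The numbers below are just the hypothetical ones from the paragraph above, but they show how far the "adjunct share" swings depending on whether you count sections or people.

```python
# Hypothetical staffing numbers from the example above.
full_time_faculty = 2
adjunct_faculty = 4
courses_per_full_timer = 5
courses_per_adjunct = 2

ft_courses = full_time_faculty * courses_per_full_timer   # 10 sections
adj_courses = adjunct_faculty * courses_per_adjunct       # 8 sections

# Counting course sections: adjuncts teach a minority of the schedule (~44%).
print("Adjunct share of courses:",
      adj_courses / (ft_courses + adj_courses))

# Counting heads: adjuncts are two-thirds of the faculty (~67%).
print("Adjunct share of faculty:",
      adjunct_faculty / (full_time_faculty + adjunct_faculty))
```

Both figures are true; which one gets reported usually depends on the point being made.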

Annoyingly, college ERP systems tend to be clunky enough that even well-intended people can generate terrible data, simply based on errors in how students or programs get coded in the system. I've lived through enough ERP-generated nightmares to wince at the very mention of the acronym.

The question of verifiability is tough to answer across the board. Data can be false, or they can be accurate but misleading, or they can be ill-defined, or they can be artifacts of system errors. My rule of thumb is that the worst errors can usually be sniffed out by cross-referencing. If a given data point is a wild outlier from everything else you've seen, there's probably a reason. It's not a perfect indicator, but it has served me tolerably well.
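
In the spirit of that rule of thumb, here's a crude sanity check: flag any new figure that lands far outside what you've seen before and go ask why. The threshold and the numbers are invented; this is an illustration of the habit, not how any IR office actually does it.

```python
# Flag a value that sits more than `tolerance` (25%) away from the
# historical average. Purely illustrative; thresholds are invented.
def looks_like_outlier(new_value, history, tolerance=0.25):
    avg = sum(history) / len(history)
    return abs(new_value - avg) > tolerance * avg

past_retention_rates = [0.61, 0.63, 0.60, 0.62]            # invented figures
print(looks_like_outlier(0.62, past_retention_rates))      # False: plausible
print(looks_like_outlier(0.91, past_retention_rates))      # True: go ask why
```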

Wise and worldly readers – any thoughts on this one?

Have a question? Ask the Administrator at deandad (at) gmail (dot) com.