Tuesday, May 19, 2015

 

The Problem with Longitudinal Data


This may be the unsexiest title ever, but the subject matters.

This week we got the latest data on our six-year student success rate.  It’s supposed to tell us how we’re doing, and in a global sense, it does.  But it has a glaring flaw that reduces its usefulness in driving change, and renders it absurd for use in performance funding.

It’s at least six years old.  

In fact, it’s slightly older than that, due to the delay in gathering data, which means that we just got numbers for the cohort that entered in the Fall of 2007.

People who study retention data insist that the lion’s share of attrition happens in the first year.  That means that the hot-off-the-presses numbers we’re getting now are mostly reflective of what happened in the Fall of 2007 and the Spring of 2008.  That was before the Great Recession, the enrollment spike of 2009-10, its subsequent retreat, and the largest wave of state cuts in memory.  It reflects what was, demographically, a different era.  And it misses everything we’ve done in the last six years, since someone who dropped out in early 2008 missed the innovations introduced in 2010 or 2012.  
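To put the lag in concrete terms, here’s a quick back-of-the-envelope calculation. The six-year window is from the post; the roughly two-year gathering delay is my assumption, sized so the numbers land in 2015.

    ENTRY_YEAR = 2007        # cohort enters in Fall 2007
    WINDOW = 6               # six-year success-rate window (closes mid-2013)
    GATHERING_DELAY = 2      # assumed lag before the numbers actually arrive

    report_year = ENTRY_YEAR + WINDOW + GATHERING_DELAY
    print(report_year)                      # 2015

    # Most attrition happens in the first year (2007-08), so the freshest
    # part of the signal describes behavior from seven or eight years ago.
    print(report_year - (ENTRY_YEAR + 1))   # 7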

In other words, as a reflection of what we’re doing now, it really doesn’t help.

It’s possible to get much more recent data, of course, but it’s necessarily partial.  In any given year, indicators can point in seemingly contradictory directions; the underlying picture may not become clear until long after it has ceased to be useful.  The owl of Minerva spreads its wings at dusk, by which time it’s too late.

From a system perspective, longitudinal data has real value.  It can serve usefully as a reality check or a diagnostic, especially when the data is chosen to reflect a sound theory.  For example, I’m a fan of the surveys that show the percentage of state university grads who have some community college credits.  The percentages are so much higher than cc grad rates that they strongly suggest we’re asking the wrong questions.  They don’t shed much light on individual campuses, but they indicate that the ecosystem is more than the sum of its parts.  We’d be wise to keep that in mind in discussions of, say, funding policy.

But drilling down from a long-term systemic view to a single campus and year-to-year variations in funding is problematic at best.  

On campus, it’s difficult to run “clean” experiments, since we can’t isolate interventions.  In any given year, we’re trying multiple things, and the external environment is changing in a host of ways.  Did a one-point gain last year reflect a policy shift, a demographic shift, better execution, or random chance?  It’s hard to know.
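Part of the answer is plain sampling noise. A minimal sketch, assuming a hypothetical cohort of 1,000 students and a 20 percent baseline success rate (neither figure is from our actual data):

    import math

    n, p = 1000, 0.20                  # hypothetical cohort size and baseline rate

    # Standard error of a simple proportion: sqrt(p * (1 - p) / n).
    se = math.sqrt(p * (1 - p) / n)
    print(f"{se:.4f}")                 # ~0.0126, about 1.3 percentage points

    # A one-point (0.01) year-over-year gain is less than one standard error
    # here, so it could easily be random cohort-to-cohort variation.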

Has anyone out there found a really good, really early indicator that’s actually useful in improving institutional performance?  Right now, we have to choose between timely and good, and that’s a frustrating choice.

Comments:
True longitudinal data follows the student everywhere, not just at your institution, and would answer some of those questions. It would also follow those who were not full-time, first-time-in-college (FTIC) students.

My question is, are you like my college, where we identify a special cohort (usually tied to our reaffirmation of accreditation cycle) and follow it to get results at the start of the self-study period? Or do you have an entire series of these studies, so you also have 5-year, 4-year, ..., 1-year data?

AFAICT, we do not do the latter, but we do run smaller studies of everyone taking a particular course to identify issues and (with a different group) see what happens after some innovation has been tried.
 
Look into difference in differences. If the intervention affects one group and not another, you can study its effects.
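For anyone who hasn’t seen the technique, a minimal sketch with made-up retention rates; all four numbers are hypothetical:

    treated_before, treated_after = 0.60, 0.66   # group that got the intervention
    control_before, control_after = 0.62, 0.64   # comparable group that did not

    # The control group's change estimates what would have happened anyway
    # (demographics, economy, execution); subtracting it isolates the intervention.
    did = (treated_after - treated_before) - (control_after - control_before)
    print(f"{did:+.2f}")                          # +0.04, roughly a four-point effect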
 
Agreeing with CCPhysicist (as usual). We find longitudinal data most useful because our system tracks students across institutions and even captures what they did in high school.

Also, the only thing worse than having longitudinal data is not having them.
 
nicoleandmaggie's point @4:46AM reminded me of something I pointed out to our local folks:

If you see something coming down the pipe, like some sub-cohort that you might want to track for some reason (a new admit planning to transfer to a specific school under one of your new programs), you want an ERP that allows you to add or modify a field so there is a convenient sorting flag.

I know that it requires a major effort to look at difference in differences when the treatment group can only be identified by course/section numbers for a given term, as when looking at the longitudinal effects of a trial pedagogy or hybrid format, etc.
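To make that concrete: without a flag in the ERP, the treatment group has to be reverse-engineered from section codes after the fact. A hypothetical pandas sketch (the section-naming convention and all data are invented for illustration):

    import pandas as pd

    enrollments = pd.DataFrame({
        "student_id": [101, 102, 103, 104],
        "section":    ["H1", "02", "H2", "03"],  # "H" sections used the trial pedagogy
        "retained":   [1, 1, 0, 1],
    })

    # Derive the flag that a dedicated ERP field would have stored up front.
    enrollments["treated"] = enrollments["section"].str.startswith("H")

    # Compare retention between trial and conventional sections.
    print(enrollments.groupby("treated")["retained"].mean())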
 