Wednesday, January 14, 2015

 

"Useful"


Next week a colleague and I are hosting a presentation for faculty and staff in which we’ll share the institutional data used for decision-making.  It’s not the first time we’ve done it; we had a similar “data day” last year.  The idea was to honor the goals of transparency and shared governance by making sure that everybody had access to the same facts.  An informed campus, I assumed, would have a better shot at making good decisions.

This year brings updates to the data, including -- spoiler alert -- a pretty dramatic jump in the graduation rate.  That’s great, as far as it goes.  I’m always happy to share good news.  But I’m wondering now about a different way to select the data.

Information like course completion rates and graduation rates can help set a context for institution-level decisions, which can make discussions in governance more grounded.  But I’m not sure that it’s terribly useful in the classroom, or when trying to help an individual student.  In other words, a professor on the college Senate may find this stuff helpful, but someone who’s mostly focused on preparing and improving her own classes might not.

This is where I’m hoping my wise and worldly readers can help.

For readers who work in faculty or staff roles: what kind of data would you find useful in the context of your day-to-day work?  Would it be something like the difference in pass rates between students who use the tutoring center and students who don’t?  Differences in gen ed outcomes assessment scores by major?  Percentage of students with reliable internet access at home?  Percentage of students who report not buying textbooks?  Percentage of students who work more than twenty hours a week for pay?

I’m trying to move from data that’s only useful at the institutional level to data that faculty and staff could use in making their own decisions.

Wise and worldly readers, what say you?  What data would you actually find useful?

Comments:
Correlations between individual student success and the various factors that do (and don't) affect it.

For example, if the data show that there is no correlation between pass rates and using the tutoring centre, then I'd be less likely to recommend that to a student (and more likely to question, at an institutional level, if the centre is the best use of limited resources). If there is a positive correlation, then it's worth recommending to students who are struggling.

Not knowing in advance what data will be useful, I'd be inclined to make it all available, and (if possible) run some serious factor analysis to see what meaningful correlations exist.
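
For what it's worth, the first pass at that doesn't require anything fancy. Here's a minimal sketch in Python/pandas, assuming a student-level table with hypothetical column names (none of this reflects any real institution's schema):

```python
import pandas as pd

# Hypothetical student-level table; all column names are invented.
df = pd.read_csv("students.csv")  # one row per student, passed = 1/0

factors = ["used_tutoring", "has_home_internet", "bought_textbooks",
           "works_over_20_hrs"]

# Simple correlations between each factor and passing.
# These are correlations only -- selection effects apply throughout.
for col in factors:
    r = df[col].corr(df["passed"])
    print(f"{col}: r = {r:.2f}")
```

A proper factor analysis would come after a scan like this, once you know which variables are worth the trouble.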

And given how much trouble many non-math people have understanding data, I'd have some math profs handy, willing to point out on the fly the limits of the conclusions that can be drawn from the data.

(Case in point, I remember a colleague in the humanities vehemently insisting that a 50% failure rate in one cohort was too high, that it shouldn't be higher than 20% — and not understanding that with only two students in the cohort there were no options other than 0, 50, and 100%. As a math geek, not a people person, I'm still not certain if they really didn't understand numbers, or if the numbers were a handy way to boost their 'caring person' cred with other faculty.)
 
We've had a difficult time at my institution answering the question "What happens to students after they leave my class?"; i.e., what, exactly, is the population I'm serving? Which courses do they tend to take next? Do they transfer? Stick around?

If I knew that many of the students in Math 123 went on to Science 234, then I could have a conversation with the Science faculty about what aspects of Math 123 serve their students well.

Similarly, if I'm teaching the Math 123-124-125 sequence, I only see the students who continue in the sequence. For those who disappear after the first term, I have no idea whether something went wrong, or whether there's a good reason for them to take only the first course.
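
If IR can export the raw enrollment records, even a rough version of the "what next?" question is answerable. A sketch, with hypothetical column names and assuming integer term codes:

```python
import pandas as pd

# Hypothetical enrollment records: one row per (student, term, course).
enr = pd.read_csv("enrollments.csv")  # columns: student_id, term, course

took = enr.loc[enr["course"] == "MATH123", ["student_id", "term"]]
nxt = enr.merge(took, on="student_id", suffixes=("", "_m123"))
nxt = nxt[nxt["term"] == nxt["term_m123"] + 1]

# Where did MATH123 students show up the following term?
print(nxt["course"].value_counts(normalize=True).head(10))
# Students with no row the next term left the sequence -- or the college.
```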
 
First, I have seen the data on use of the tutoring center at my CC, and everyone should see it so they can share it with students. And I'd be hugely pleased if my own CC's admin had a clue about how often students read e-mail, or how often important e-mail from faculty gets lost in the deluge of spam-like announcements from the college PR folks.

Second, everything "pta" said above. I would add info about success at the next level based on the grade in the previous class. The people teaching class A with its high failure rate don't want to hear that they are (barely) passing kids who will never pass class B, but they need to hear it. Then the discussion can move to whether topic X should be dropped (because no one actually learned it or ever uses it again) in favor of topic Y that turns out to make the difference between passing and failing the next semester.

The biggest problem I have is that someone does a study (such as success in college algebra based on how students got into it: SAT or ACT score, placement test, passing pre-req class) one year, and that is it. What about next year? Or the next, after the pre-req gets tweaked or students have even more years of NCLB testing in their past?

Great news about your grad rate. Even better if your graduates' success after transfer is as high as, or higher than, it was before. Those are the data I most want to see at my college.
 
I'd love to see some data broken down by department. For example, at our college students in hybrid courses have higher success rates than those in online courses. But most of the hybrids are concentrated in a few departments. So are hybrid students more successful because of the course format, or because of the department itself (e.g., maybe some departments have easier courses in general, or attract more talented or motivated students)?

Obviously there could be a lot of political issues involved in this sort of analysis, and from a statistical standpoint it's hard to generalize from small numbers. But it's an important question nonetheless.
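
One way to get at it is to stratify: compare hybrid vs. online within each department rather than only overall (the classic Simpson's paradox check). A rough sketch, with invented column names:

```python
import pandas as pd

# Hypothetical section-level data; column names are invented.
df = pd.read_csv("sections.csv")  # columns: dept, modality, enrolled, passed

# Pass rate by modality overall...
overall = df.groupby("modality")[["passed", "enrolled"]].sum()
print(overall["passed"] / overall["enrolled"])

# ...and the same split within each department.
within = df.groupby(["dept", "modality"])[["passed", "enrolled"]].sum()
print((within["passed"] / within["enrolled"]).unstack())
# If hybrid's edge shrinks within departments, department mix explains part of it.
```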
 
Like anonymous, and in particular anonymous comment 4, I would like to know the relationship among online, hybrid, and traditional success rates. I would like to see how quickly enrollment shifts among distance programs. Are any of these programs garnering steady enrollment, or is it always a crapshoot?

I would also like to know how quickly demographics change. It would be useful to see how our international students have responded to specific marketing pushes.

LGBTQ demographics might inform some of my classroom behavior. I have a colleague who asks on the first day what pronouns students would prefer she use for them.

Crime rates would be useful too. I presume most universities send the same security report I get each year but in the context of graduation and retention, it might mean something else.


 
I would take care to point out (repeatedly) the difference between correlation and causation in your data - and almost all of your multivariate data will be correlational. For instance, students who go to the Tutoring Center at my school get significantly better grades. Are they doing better because they went to the Tutoring Center? Or do students with more time, better study skills, etc., tend to go to the Tutoring Center more? The truth is probably a combination of the two. However, people will insist on their own interpretation - and not change their minds once they're made up. Selection bias is huge in education data, and people often don't think about it.

Our school initially looked at "gateway classes" with low pass rates. However, we found that those were partially tied to some instructors who had lower pass rates. And the instructors with low pass rates had students who had less time in college, and so were less likely to pass. Causality is hard to track down in situations like that, but everybody involved had their own pet theory.
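
One partial remedy is to show the naive comparison and an adjusted one side by side, so people can see how much of the gap survives after controlling for the obvious confounders. A sketch using statsmodels, with hypothetical column names (adjustment reduces selection bias; it doesn't eliminate it):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: passed (1/0), used_tutoring (1/0), credits_earned, prior_gpa.
df = pd.read_csv("students.csv")

# Naive comparison: tutoring users vs. non-users.
print(df.groupby("used_tutoring")["passed"].mean())

# Adjusted comparison: same contrast, controlling for plausible confounders.
model = smf.logit("passed ~ used_tutoring + credits_earned + prior_gpa",
                  data=df).fit()
print(model.params["used_tutoring"])  # log-odds effect after adjustment
```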

Data that shocks people tends to mobilize them, if departments are empowered to make a change. What's the average textbook cost for a full-time student? What is the distribution of pass rates (by section) in one of your high-enrollment classes? People will ask more questions, and try to get answers themselves. If you've got the IR capacity, that's wonderful. In general, we've found it difficult to share data at an all-campus meeting in a way that helps individual instructors. The scale is too different.
 
Last year I was on a working group that got NSSE data comparing students from our faculty to the whole-campus data. What I found most compelling was that our students were commuting longer hours and working off-campus more hours for lower pay than students in other faculties (on average). That clearly has implications for the Dean's office and the kinds of support they (should) offer, and as an individual instructor, it helps me remember to be flexible with deadlines and grading schemes, because so many students have serious responsibilities outside of class.
 
I'd be reluctant to just show a bunch of correlations with success; the ones you mention seem largely out of the hands of the faculty. I can tell my students it's great to visit the tutoring center and they are more likely to succeed if they do, but some just won't. I can know they'll do better if they have internet at home, but if they don't, I can't make them. They likely don't have it for reasons larger than me. Worse, there are some faculty who will use these data to presume that some students simply won't succeed.

I'd work with what IS in the control of the faculty, for the most part. I second (or I guess third) everything pta said, and I'm with CCPhysicist, too.

Finally, like Nik said, LGBTQ data, but also who else is in your classroom. Diversity comes in lots of forms, and that kind of data would be a great way to help people understand that working toward an inclusive classroom benefits the students they already have in the room.
 
Definitely attrition rates from level to level in a series of courses. We ran this data once for foreign language classes and discovered one language lost nearly 90% of its students from semester 2 to semester 3, while the other languages only lost about 15%. Since there were multiple instructors in the high-attrition language, we knew it wasn't a teaching problem--rescheduling the courses so they didn't interfere with other required courses brought the attrition rate down to something closer to the other languages.
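
For anyone who wants to run the same check, that level-to-level attrition is easy to compute from enrollment records. A minimal sketch, with invented course codes:

```python
import pandas as pd

# Hypothetical enrollment records with invented course codes.
enr = pd.read_csv("enrollments.csv")  # columns: student_id, course

def attrition(from_course, to_course):
    started = set(enr.loc[enr["course"] == from_course, "student_id"])
    continued = set(enr.loc[enr["course"] == to_course, "student_id"])
    return 1 - len(started & continued) / len(started)

for lang in ["SPAN", "FREN", "GERM"]:
    print(lang, f"{attrition(lang + '102', lang + '103'):.0%}")
```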
 
I would also like data on job placements of alums - what roles and sectors do they work in? We keep up with some of them, but they probably aren't representative.
 
I would like to see pre- and post-assessment done. For example, if all students leave English 1 able to write a paragraph, that's good. But how many of them could do that before they entered the course? For many of our assessed outcomes, I would like to see a pre-assessment to see how far the students have come, rather than just a post-test that assumes they started from zero. I would also like to see student performance stratified by age and gender, as well as by race and socioeconomic status.
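
If the pre- and post-scores live on the same scale, one common summary is the normalized gain (Hake's g): the fraction of the possible improvement a student actually achieved. A sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical pre/post scores on the same 0-100 outcome rubric.
df = pd.read_csv("assessments.csv")  # columns: student_id, pre, post

df = df[df["pre"] < 100]  # avoid dividing by zero on perfect pre-scores

# Hake's normalized gain: fraction of possible improvement achieved.
df["gain"] = (df["post"] - df["pre"]) / (100 - df["pre"])
print(df["gain"].mean())
```

Unlike a raw post-test average, this doesn't reward courses whose students started near the ceiling.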
 
As an engineering faculty member, I'm interested in gender balance, and where in the pipeline it gets so far out of whack. (I just checked for our program, and the gender balance is about the same at admissions as at graduation—the problem is with who is applying.)

I'd also be interested in the predictive power of each prerequisite for each course. If the grade in a prereq has little predictive power for the grade in a course, it might be time to remove the prereq. But that is a very detailed level of analysis that is not appropriate for a big meeting.
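
For whoever does run that analysis offline, the core of it is just a correlation per prereq/course pair. A sketch, assuming a hypothetical table of paired grades:

```python
import pandas as pd

# Hypothetical paired grades (4.0 scale) for students who took both courses.
pairs = pd.read_csv("grade_pairs.csv")  # columns: prereq, course, prereq_grade, course_grade

for (p, c), g in pairs.groupby(["prereq", "course"]):
    r = g["prereq_grade"].corr(g["course_grade"])
    print(f"{p} -> {c}: r^2 = {r ** 2:.2f} (n = {len(g)})")
# A near-zero r^2 suggests the prereq grade says little about course performance.
```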

Time-to-degree/transfer for each program would also be interesting, as would determining what the "bottleneck" classes are.
 