Wednesday, January 14, 2015
Wise and worldly readers, what say you? What data would you actually find useful?
For example, if the data show that there is no correlation between pass rates and using the tutoring centre, then I'd be less likely to recommend that to a student (and more likely to question, at an institutional level, if the centre is the best use of limited resources). If there is a positive correlation, then it's worth recommending to students who are struggling.
Not knowing in advance what data will be useful, I'd be inclined to make it all available, and (if possible) run some serious factor analysis to see what meaningful correlations exist.
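A minimal sketch of the kind of correlation screen this implies, using entirely hypothetical per-student records (tutoring-centre visits, placement score, pass/fail). A real factor analysis would need a stats package; pairwise Pearson correlations are enough to see which variables are worth a closer look:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical records: (tutoring visits, placement score, passed 1/0)
students = [
    (0, 55, 0), (2, 60, 0), (5, 62, 1), (8, 70, 1),
    (1, 58, 0), (6, 75, 1), (3, 65, 1), (7, 68, 1),
]
visits = [s[0] for s in students]
placement = [s[1] for s in students]
passed = [s[2] for s in students]

print("tutoring vs pass:", round(pearson(visits, passed), 2))
print("placement vs pass:", round(pearson(placement, passed), 2))
```

With real institutional data the interesting step is the one after this: deciding which of the nonzero correlations survive a check for confounds (the math profs mentioned below earn their keep there).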
And given how much trouble many non-math people have interpreting data, I'd have some math profs on hand, willing to point out, on the fly, the limits of the conclusions that can be drawn from the data.
(Case in point, I remember a colleague in the humanities vehemently insisting that a 50% failure rate in one cohort was too high, that it shouldn't be higher than 20% — and not understanding that with only two students in the cohort there were no options other than 0, 50, and 100%. As a math geek, not a people person, I'm still not certain if they really didn't understand numbers, or if the numbers were a handy way to boost their 'caring person' cred with other faculty.)
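The granularity problem in that anecdote is easy to make concrete: with n students, the only attainable rates are multiples of 100/n. A two-line sketch:

```python
def attainable_rates(n):
    """Pass/failure rates (in percent) actually attainable in a cohort of n students."""
    return [round(100 * k / n, 1) for k in range(n + 1)]

print(attainable_rates(2))  # two students: only 0, 50, or 100% are possible
print(attainable_rates(5))  # five students: steps of 20%
```

Any "rates shouldn't exceed X%" rule needs a minimum cohort size before it means anything.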
If I knew that many of the students in Math 123 went on to Science 234, then I could have a conversation with the Science faculty about what aspects of Math 123 serve their students well.
Similarly, if I'm teaching the Math 123-124-125 sequence, I only see the students who continue in the sequence. For those who disappear after the first term, I have no idea whether something went wrong, or whether there's a good reason for them to take only the first course.
Second, everything "pta" said above. I would add info about success at the next level based on grade in the previous level class. The people teaching class A with its high failure rate don't want to hear that they are (barely) passing kids who will never pass class B, but they need to hear it. Then the discussion can move to whether topic X should be dropped (because no one actually learned it or ever uses it again) in favor of topic Y that turns out to be a matter of pass or fail in the next semester.
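The analysis described here is a simple conditional pass rate: for each grade earned in class A, what fraction of those students went on to pass class B? A sketch with made-up records (the class names and grades are placeholders, not real data):

```python
from collections import defaultdict

# Hypothetical records: (grade earned in class A, passed class B?)
records = [
    ("A", True), ("A", True), ("B", True), ("B", True), ("B", False),
    ("C", True), ("C", False), ("C", False), ("D", False), ("D", False),
]

totals = defaultdict(int)
passes = defaultdict(int)
for grade_a, passed_b in records:
    totals[grade_a] += 1
    passes[grade_a] += passed_b  # True counts as 1

for grade in sorted(totals):
    rate = 100 * passes[grade] / totals[grade]
    print(f"grade {grade} in class A -> {rate:.0f}% pass class B")
```

If the bottom rows of that table sit near zero, that's the evidence the class-A instructors need to hear, however unwelcome it is.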
The biggest problem I have is that someone does a study (such as success in college algebra based on how students got into it: SAT or ACT score, placement test, passing pre-req class) one year, and that is it. What about next year? Or the next, after the pre-req gets tweaked or students have even more years of NCLB testing in their past?
Great news about your grad rate. Even better if their success after transfer is as high or higher than it was before. Those are the data I most want to see at my college.
Obviously there could be a lot of political issues involved in this sort of analysis, and from a statistical standpoint it's hard to generalize from small numbers. But it's an important question nonetheless.
I would also like to know how quickly demographics change. It would be useful to see how our international students have responded to specific marketing pushes.
LGBTQ rates might inform some of my classroom behavior. I have a colleague who asks on the first day what pronoun students would prefer she use for them.
Crime rates would be useful too. I presume most universities send the same security report I get each year, but in the context of graduation and retention, it might mean something else.
Our school initially looked at "gateway classes" with low pass rates. However, we found that those rates were partially tied to particular instructors, and the instructors with low pass rates tended to teach students with less time in college, who were less likely to pass for that reason alone. Causality is hard to track down in situations like that, but everybody involved had their own pet theory.
Data that shock people tend to mobilize, if departments are enabled to make a change. What's the average textbook cost for a full-time student? What is the distribution of pass rates (by section) in one of your high-enrollment classes? People will ask more questions, and try to get answers themselves. If you've got the IR capacity, that's wonderful. In general, we've found it difficult to share data at an all-campus meeting in a way that helps individual instructors. The scale is too different.
I'd work with what IS in the control of the faculty, for the most part. I second (or I guess third) everything pta said, and I'm with CCPhysicist, too.
Finally, like Nik said, LGBTQ status, but also who else is in your classroom. Diversity comes in lots of forms, and that data would be a great way to help people understand that working toward an inclusive classroom benefits the students they already have in the room.
I'd also be interested in the predictive power of each prerequisite for each course. If the grade in a prereq has little predictive power for the grade in a course, it might be time to remove the prereq. But that is a very detailed level of analysis that is not appropriate for a big meeting.
Time-to-degree/transfer for each program would also be interesting, as would determining what the "bottleneck" classes are.