Friday, February 11, 2011

 

Not Achieving the Dream

Achieving the Dream is an initiative sponsored by the Lumina Foundation and spearheaded by one of my personal heroes, Kay McClenney. It’s an attempt to get community colleges across the country to build ‘cultures of evidence’ about student success. It relies heavily on data-driven decision-making, with the goal of prodding colleges to move from the way things have always been done to the ways that things actually succeed. It’s a great idea, and I’m a fan. (For the record, my college is not an ATD school.)

That said, though, I can’t say I’m shocked at this report. Apparently, a national study has found that colleges that have signed on to ATD have not seen statistically significant gains in any of the measures used to gauge success.

Although my college is not an ATD school, it is working diligently on a number of similar measures to improve student success rates. Here, too, the results so far have been disappointing. And we have one of the better Institutional Research offices around.

Assuming the presence of a strong IR staff, good Presidential support, thoughtfully-constructed interventions, and broad agreement on the overall goal -- all of which are present here -- why aren’t we moving the needle?

I’ll answer the question with another question. Good, strong, solid, peer-reviewed scientific data has made it abundantly clear that poor eating habits lead to obesity and all manner of negative health outcomes. There’s no serious dispute that obesity is a major public health issue in the US. And yet people still overeat. Despite reams of publicity and even Presidential support for good eating and exercise habits, obesity continues to increase. Why?

Sometimes it’s more than a matter of knowing where the problem is.

For example, in the case of student success, there’s the fundamental problem of thin budgets. I’ve seen data suggesting that higher percentages of full-time faculty lead to better student outcomes, and I assume that there’s some truth to that. But we can afford only what we can afford. Knowing that a major increase in the instructional budget might help is of only theoretical interest when we’re taking year after year of operating budget cuts. We’ve shifted money around internally to keep the faculty numbers from slipping, but they haven’t grown, and enrollments have. (And the few remaining deans are stretched so thin that talk of quitting is becoming endemic.)

Thin budgets also manifest themselves in ‘boutique’ interventions that don’t scale up. On my own campus, we’ve had great results with several very labor-intensive programs: supplemental instruction, summer bridge programs, that sort of thing. They’re terrific for the handful of students who have access to them. But we have nothing close to the budget it would take to make those available to all, or even most, students. So we can get good percentage improvements in targeted areas, but the overall numbers don’t really move.

There’s also a fundamental issue of control. Faculty as a group are intensely protective of their absolute control of the classroom. Many hold on to the premodern notion of teaching as a craft, to be practiced and judged solely by members of the guild. As with the sabermetric revolution in baseball, old habits die hard, even when the evidence against them is clear and compelling. There’s a real fear among many faculty that moving from “because I say so” to “what the numbers say” will reduce their authority, and in a certain sense, that’s true. In my estimation, this is at the root of much of the resentment against outcomes assessment.

Even where there’s a will, sometimes there just isn’t the time. It’s one thing to reinvent your teaching when you have one class or even two; it’s quite another with five. And when so many of your professors divide their time among different employers, even getting folks into the same room for workshops is a logistical challenge.

Of course, accountability matters. Longtime readers know my position on the tenure system, so I won’t beat that horse again, but it’s an uphill battle to sell disruptive change when people have the option of saying ‘no’ without consequence. The enemy isn’t really direct opposition; it’s foot-dragging.

ATD doesn’t address the internal politics of colleges as institutions. That’s entirely fair -- they vary by location, and addressing them would probably kill the project altogether -- but anyone who has tried to make headway on these issues can attest that internal politics can kill almost anything. Short of a massive exogenous shock to the system, it’s hard to imagine what will change that.

More darkly, there’s the unspoken truth that some students will just never make it. Depending on your angle to the universe, the meaning of “some” will vary; I’ve heard serious people argue earnestly that the pass rates we currently have are simply the best we can get, given the students we get. It’s hard not to notice that selective institutions have consistently higher student success rates, even when they herd their students into 300-seat lectures taught by graduate students. When you have open-door admissions, you can’t repackage failure as ‘selectivity;’ instead, you have to own it and get blamed for it. Selective institutions can outsource failure; we don’t have that option.

It’s possible to take the study on ATD as vindication for a sort of fatalism, but I think that would be a mistake. I’m not Panglossian enough to assume that this is the best of all possible worlds. In fact, longtime readers may have seen me make suggestions for improvement from time to time. And it strikes me as obviously correct to base strategies for improvement on actual empirical evidence rather than on unthinking adherence to tradition or, alternately, watered-down caricatures of an idealized corporation. My guess is that we’re only beginning to grapple with some of the deeper issues, many of which will require much more disruptive change than most people suspected at the outset. Whether public institutions have the courage to do that, or whether for-profit competitors will swoop in and eat our collective lunch, I don’t know. But if we’re serious, we’d be well advised to attend even more assiduously to reality-based reform.

Comments:
Actually, there's a lot of good, strong, peer-reviewed research demonstrating that obese people consume pretty much the same amount of food that thinner people do, and tend to be more active. And yet they're obese. Check out Gina Kolata's "Rethinking Thin" for a summary of the research.

And though I was first spurred to comment because it drives me crazy how far the "common sense" answers to the supposed obesity crisis are from the research-supported reality, I think this may also illustrate a point about college success. Are we really sure that the research is measuring the right things? When the treatments aren't curing the problem, something isn't right with the match between the treatment and the condition. I don't have an answer, but maybe "common sense" isn't accurate in this case either.
 
CCs cannot be expected to work miracles and fix all the problems that were created in K-12. If evidence-driven decision making means copying the accountability movement from K-12, then it's going to be a dead end. No faculty will sign up, and the numbers will never be encouraging.

Consider the effect of No Child Left Behind. I've seen a noticeable decline in the basic math skills of students at all levels in the last 5 years. Every year I discover a new deficiency that wasn't present in previous years (we are talking about Calculus students not able to add fractions). Yet NCLB was assumed to be "working" since the scores were going up. It seems that K-12 was devoting too much time to preparing students for tests, at the cost of killing students' interest in math, trading quality instruction for test-taking skills. Is NCLB a factor in the study? Are socio-economic factors examined in the study?

Personally, I'd like to see accountability become part of the CC culture. But it strikes me that we need better indicators of success. I am tired of seeing administrators standing on a soapbox telling us that fewer than 50% of our dev ed students move on to the next class. This is exactly what turns people off from outcomes assessment.
 
"Good, strong, solid, peer-reviewed scientific data has made it abundantly clear that poor eating habits lead to obesity"

Uh, no. It hasn't. I think this is such a widespread meme because people simply assume that eating too much of the wrong kinds of food leads to weight gain. One can observe this on a personal level for a small amount of weight variation, but at the population level? No, the data just isn't there. There's also plenty of data to show fat people eating LESS than thin people (possibly because they get targeted so much with EAT LESS messages that they do so).

"There’s no serious dispute that obesity is a major public health issue in the US."

There's no widely-publicised dispute. But there is a very serious dispute. For example, just as a brief point, we've been getting fatter over the past few years hand in hand with an increased life expectancy, which you wouldn't expect if obesity really was such a problem. The issue of weight and health is actually very tangled, and it's really not clear to what extent the popular views on weight and health are influenced by society's views rather than actual data.

"And yet people still overeat.
Despite reams of publicity and even Presidential support for good eating and exercise habits, obesity continues to increase. Why?"

You have the answer right there under your nose - it's because overeating isn't the cause! Apply Occam's razor: it's incredible to believe that people overeat a lot despite societal pressures not to overeat, so the simpler answer is that they don't overeat. The available data suggests that eating habits only influence a few pounds' worth of body weight.

Whether we are fat or thin or in-between depends simply on how our metabolic systems are set up to use, store and request energy.

Applying all this to the ATD initiative, I'd have to agree with the other commenter - perhaps you're addressing the wrong things. Another possibility is that you're addressing the right things, but the pressures you're using to apply the right things aren't working, for one reason or another.

An example from my institution: the powers-that-be want us to be more responsive to student emails. Do they examine why our workloads are high and lighten them to provide more time to answer emails? Do they move some in-semester things out of semester, giving us more time to answer emails? No. They send us sternly-worded directives of the form "Thou shalt respond to all student emails within two working days".
 
"ATD" is dean-speak. I can't claim any familiarity with it. But in my neck of the woods, through a painful reaccreditation process, I have become aware of the comparatively weak notion of "Outcomes Measurement." I am still trying to form opinions on the matter, but let me attempt to articulate a position (admittedly, from more than a small dose of ignorance).

I believe that such measures will succeed in measuring only that which is not worth measuring. In our culture we have trade schools and we have universities. We can measure outcomes in trade schools very well - do students know how to adjust a carburetor? Do students know how to troubleshoot DNS host errors? We can make tests for these things. The tests are good.

Colleges and universities are not trade schools. The main difference is our insistence upon General Education. The engineer and the Com Studies major must confront philosophy, literature, foreign languages, etc. Our reason for this insistence is simple: we take it to be true that intellectual development is not solely reducible to propositional or instrumental knowledge. Yet it is only the propositional/instrumental that measurement tests are capable of measuring.

We are therefore in a position in which we need to ask ourselves whether our insistence upon "General Education" is wise. Is this what a College/University needs in order to teach its students? Obviously, as a professor of Philosophy you can guess what my answer would be to that question. But I'll save the lecture for now. I'll merely say this: fifty years ago General Education was a 60-hour commitment. We're now down (on average) to a little more than half of that. And the measured outcomes don't look good. I suggest a correlation exists.

I also echo a previous commenter - we can't hold colleges accountable for what happens in K-12.

Goodness, my philosophy majors write an 80-page senior thesis, yet there is an ED class on my campus where the final exam is writing cursive on the board.
 
As an alternative to test scores/retention rates (which often lead to the lowering of standards, in my opinion), I will propose an outcome assessment indicator for all of you in admin: what about faculty satisfaction? Self-serving as it may sound, faculty's interest in improving instruction is mostly aligned with what outcome assessments are intended to measure. The tough teacher may have fewer people passing her course, but she may be satisfied with how she better prepared the more motivated students for a career in engineering, by using innovative teaching strategies. This will still allow colleges to keep the "boutique" programs that DD mentioned in his original post. They may be costly, but people who are involved get a lot of satisfaction.

With all the furloughs, pay cuts, and hiring freezes, the least we can do is make sure people are still happy doing the jobs they signed up for. I surely hope we will not repeat the mistakes of the K-12 accountability movement (if you have not read Diane Ravitch's book, I'd highly recommend it).
 
Also see Paul Campos' "The Obesity Myth" for more research on the overstated link between obesity and health.

I'm puzzled by your argument. You criticize faculty who don't adopt reform practices for relying on personal opinion instead of research. Yet the report you cite suggests that "student-centered" reforms don't significantly improve learning.
 
"Apparently, a national study has found that colleges that have signed on to ATD have not seen statistically significant gains in any of the measures used to gauge success."

...

"Many hold on to the premodern notion of teaching as a craft, to be practiced and judged solely by members of the guild. As with the sabermetric revolution in baseball, old habits die hard, even when the evidence against them is clear and compelling."

What evidence? I'm not trying to be flip; if the new stuff doesn't work better than the old stuff, then it . . . doesn't.
 
I'd love to see you elaborate on what it means to have a "strong IR staff" in this context. Do they continue to maintain a multi-decade longitudinal measurement of specific outcomes that is akin to keeping baseball statistics, or do they collect what is asked for, once? Are the results on trends distributed annually to the faculty?

I look at what Asst Prof wrote as an indication that a Dean, chair, and mentor didn't do a good job of getting across the history of assessment. Do you know what "Quality Improvement" program was developed a decade earlier, and what the results were of the outcomes assessment required from that round of reaffirmation of accreditation? Probably not - even at our CC, which has pretty good communication, all the negative results from our plan were swept under the rug. The only indication we had that they weren't working was the silent phase-out of parts of that plan. Similarly, the data that drove what we did a decade ago were not updated to see what has changed.

I'm glad to see that story about "AtD" because a culture of evidence is critical to looking forward and back. But be sure to ask the right questions. For a CC, the right question is whether your AA graduates succeed after they transfer, not just that they got an AA degree.
 
Math guy, you should consider asking them to do long division or "long" (3 or 4 digit) multiplication to see if any strange or non-algorithmic approach is used. You might be starting to see students who used a "discovery" approach to arithmetic and never learned fractions rather than students who had a teacher who never knew fractions.
 