Wednesday, January 02, 2013
Data and Craft
I still remember the terror and thrill of having my own class to teach for the first time.
It didn’t pay much -- this was graduate school, after all -- but the autonomy was wonderful. When I closed the classroom door, it was just me and the students. The course had goals of its own, of course -- anyone teaching Intro to American Government has to go through “Congress is divided into the House and the Senate” whether they want to or not -- but how I got there, and how I framed much of the content, was really my call. I went through a painful, if inevitable, bit of trial and error, but eventually found my stride.
After far too many years of being on the other side of the desk, and then a few years of TA'ing for some people whose styles and choices, um, let’s go with “were not my own,” I relished having the chance to do things the way I thought they should be done. When they worked, it was gratifying beyond belief; when they didn’t, at least I had the autonomy to make a change on the fly. That mattered.
(Sometimes I miss that autonomy. In administration, the constraints are greater, and successes tend to be partial, collaborative, and often indirect. There’s nothing quite like the rush of a class that really nailed it. The closest I get to that now is when a blog post really nails it.)
I thought again about that experience yesterday as I read about an appealing new project that proposes using carefully crafted analytics to improve student learning outcomes. There’s nothing inherently sinister or silly about using documented, aggregate results to drive improvements; in most fields, that would be considered common sense. In fact, there’s a perfectly intelligent argument to be made to the effect that evidence-driven reform is one of the most promising avenues we have for raising student achievement. I’ve made that argument myself before, and still believe it.
But to someone in the classroom to whom autonomy is a major benefit of the job, evidence-driven reform can look an awful lot like someone else telling you what to do. If you’re sufficiently pessimistic, it can even look like de-skilling the faculty. Even if you don’t see it as a stalking horse for a shift from artisanal to mass production of education, it can still feel intrusive. And to be fair, much of it is in the early stages, in which it may not be as precise as one would like.
The trick, which I’m still struggling with, is to find ways for faculty to be able to make those findings their own. If they can draw benefit from the information without surrendering their autonomy -- a key source of new experiments anyway -- then we’ll be where we should be. In the best of all possible worlds, this kind of information would be a resource to be used for improvement. But right now, it’s often rejected out of hand in favor of personal observation and an appeal to authority.
Wise and worldly readers, have you seen a use of analytics or evidence-driven reform on campus that didn’t raise hackles? If so, how did it work? (Alternately, have you seen a de-hackling process that worked?) Anything useful is welcome...
As a physicist, I've used data to test theories for most of my life. It is natural to do so in the classroom as well ... and that is where I discovered why research in the social sciences is such a challenge. You lack controls, so you can't repeat the same experiment the way you can in a physics lab. You can still gather data, but there are limits to what you can learn from your observations.
That has led me to look more closely at the different skills and weaknesses of each class of students and find ways of adjusting to them as early as possible. For me, the variables that confound any large single-variable analysis are where the real meat of teaching is found.
[Side comment: I've noticed that a variety of reforms are found to totally rock if every student has a minimum verbal and math SAT of 600 (each). The ones to watch for are ones that can work with an audience as diverse as is found at a CC or can be tailored to the sub-populations within a single classroom.]
As for your question, I've seen evidence-driven reform tried at my college, with some success, some backlash, and some cover-ups when an experiment fails. IMHO, as I've said here before, the key to the success of any such enterprise is an honest acceptance of failure. Experiments that don't find what you expected tell you as much as, or more than, ones that "work".
PS - I am under no illusions that classes at a community college are a mass production process. Even when my classes are "small" (by our standards), there are too many of them (hence too many total students) to confuse what I do with the work of an artisan.
I'm no longer working for CPIT (we've had some major earthquakes), but they were hugely supportive, and their methods are research-based, with ongoing research. You may want to see what they've got on the boil at the moment.
A key nudge to raise the organisation's standards was the support for the Certificate in Adult Teaching, and strong course evaluation.
At the time I was there (I'm not sure about now), staff were able to do the Certificate in Adult Teaching, and then the Diploma in Adult Teaching, for free or heavily subsidised. And, importantly, there was an automatic salary bump when you completed the Certificate (in fact, I believe you couldn't progress up without it). Part-timers also got the pay bump (I was what you'd call an adjunct).
The Certificate is highly practical and applied, and includes in-class teaching observations from the Adult Teaching team (who are there to support your teaching, not critique you for salary evaluations).
The focus was always on how students learn, teaching the students you had (not a mythical student in your head), strong planning, and maintaining student engagement.
I've moved on to other things, but you may want to contact Gerry Duignan or Selena Chan, and see what ideas they are working on.
Selena has a blog where she records her thoughts on her research, http://mportfolios.blogspot.co.nz/
They've done a lot of work about integrating e-learning and mobile technologies, which was put to the test when the institute was going through major earthquakes which forced a campus evacuation.
Key points to note:
- NZ's system is different. It's heavily outcomes focussed, where the New Zealand Qualifications Authority (NZQA) provides accreditation. The accreditation is based on an estimated time of study, but this can be a combination of in-class and personal study - so there isn't the bizarre face-time credit-hour basis that you seem to have.
- When I was there, there were two faculty unions, and you could choose to join either.
What sold me on assessment was a workshop where an assessment professional used to dealing with Humanities departments spoke to Humanities faculty directly about how to go about the work, how we're doing it already, and how it is important to our own pedagogical, disciplinary, and institutional goals. This expert sold us on using what had been perceived as a burden as a tool for improving our classes. The expert also made it clear that humanists need not become social scientists, which was very helpful.
A secondary assist came from a local convert to assessment from the Humanities ranks. This professor had served on a couple of outside departmental evaluation teams and had seen what can happen without assessment in place to ensure that goals were being met. This trusted colleague's persuasive representation of the process was invaluable for selling the process to me.
I just wish we could do something similar for the technology issue. I'm trying to push-pull the conversation in that direction myself now.
But this sense of classroom autonomy is rapidly disappearing in the current environment. Very often, you are forced to teach a largely “canned” curriculum, one that is designed by teams of instructional specialists, and you are reduced to the status of a performer who is forced to read the works of others in the classroom. If you are teaching in an online environment, the material presented in the class has been largely designed by others, even down to specifying the problem sets, the exams, and the student exercises. The thrill of being able to create and design your own class is gone.
If you are a part-timer or if you are untenured, you are at the mercy of student evaluations, and you must make sure that you are not too strict a grader or too demanding of your students, lest you get a bad review on RateMyProfessors.com and get into trouble with an administration worried that you might be hurting student retention rates. Lest you get bad ratings, you must run popularity contests and behave more and more like an entertainer or a stand-up comic in the classroom, one who keeps the students amused. You have to give high grades whether they are deserved or not, lest an irate student complain to the dean and perhaps get you fired.
Perhaps the most potent threat to classroom autonomy is outcomes assessment, the latest academic fad. Someone else -- perhaps someone who knows nothing about your particular discipline -- now tells you how you have to teach your course. You are no longer trusted, and you must now prove that your students really are learning what you are trying to teach them -- the fact that you give grades is apparently not enough. According to the outcomes assessment weenies, measurable rubrics must be established, and these rubrics have to be so tight that one can actually measure whether or not your students have met them. Very often, these rubrics are imposed on you from on high, and you have little voice in how they are written or specified. You are forced to change your class so that you are teaching to these externally-imposed rubrics, so that you are now largely teaching to the test, much as in No Child Left Behind. Outcomes assessment is often imposed in a threatening manner -- “Professor, if you don’t take this very seriously and do outcomes assessment right, we can lose our accreditation….”
Because of these fears and concerns, a lot of faculty members will either fake the assessment results or simply make up the numbers, just to keep themselves out of trouble. This creates a corrupt system in which everyone in the process knows that the whole assessment game is fake, but they keep playing it anyway. The result is an awful situation in which everyone is lying to everyone else all the way up the chain, much as in the Five Year Plans of the old Soviet Union. Everyone knows that the numbers don’t mean anything, but they report them just the same. This sometimes leads management to make hasty decisions based on bad data.
That is most definitely not the case at my college. We, the faculty, develop the Outcomes and design the Assessments. We discuss all of the above across the curriculum (including across disciplines when a pre-req chain crosses the math-science boundary, though we have yet to explore the English-science boundary), and I find that it restores focus on what we all agree are the most important parts of what we are trying to accomplish.
I would go further and argue that any college where the outcomes and assessments are dictated by persons without expertise in the relevant field should lose its accreditation.
Ownership really is the thing, here. I'm perpetually struck in these conversations by the way faculty talk about their relationship to the university that pays their salary. When you work for an organization, you work to further the mission and goals of that organization -- which is a moving target, let's be honest. So as a professor, you are not a free agent. You have a job. And part of your job is building a cohesive and effective curriculum.
That's going to require teamwork. I teach secondary school, and my boss is extremely reluctant to hire former academics because they so often exhibit a toxic refusal to consider the bigger picture and work in a team.
Faculty have got to make something beyond their own classes and their own research their business. And they've got to get on board with the idea that they do actually work for someone. Just like regular people who have regular jobs.