Wednesday, April 02, 2014

 

Pilots



Wise and worldly readers, how much time -- and how many attempts -- do you allow a “pilot” course or project before deciding whether to keep it?

As finances tighten and political pressures mount, I’m seeing less patience for waiting for pilot results to come in.  The meaning of the word itself is changing.

In my original understanding, a “pilot” was an experiment.  It was designed to see if, say, a course had staying power.  Did students respond to it?  Did it achieve what it was supposed to?  It might take a couple of attempts to really get a good read, especially if the goals are ambiguous or in tension with each other.  A new math class might require a few semesters of follow-up to see how students fared later in the sequence, for example.

But I’m seeing “pilots” now used as something closer to “dress rehearsals.”  In this version, the idea is to debug before scaling up, but the goal of scaling up is pretty much a given.  In other words, the meaning has shifted from idea-testing to implementation-testing.  We’ve already decided that the show must go on; we’re just making sure we get the blocking and lighting right.

There’s value in both versions of pilots, of course.  But mistaking one for the other can lead to unhelpful conflicts.

In some cases, the shift in meaning is entirely unconscious.  It happens largely as a result of scheduling.  Let’s say we run a new math class in the Fall of 2014.  Faculty schedules for the Fall of 2015 are done by mid-Spring of 2015.  At that point, we have exactly zero data on how students in the pilot class did in the following course, but we have to make a decision anyway.  

Alternatively, a pilot held in suspension for an extended period -- if your bylaws allow it -- accrues a sort of tenure.  People start to count on it.

So I’m trying to find the right balance.  Has anyone out there cracked the code?

Comments:
Here's an example, though I'm not sure if this is optimal.

At my R1, pilot courses receive a special designation and may run three times before they must receive re-approval as a regular course. I don't think there is a restriction on the time that elapses during those three trials; i.e., consecutive semesters are fine, or once a year for three years, etc.

Because of this temporary status, they cannot be required in a curriculum, though they can serve as electives.
 
Similar to HSlpoDD (dude, you need a shorter handle), I've seen courses "piloted" two or three times before going to a formal review by the curriculum committee. This allows flexibility in offering something new, trying out a different format, etc. (Particularly if you have visiting scholars who are contracted to teach as part of the deal -- but who can't teach the 150-person intro class for whatever reason.)

I also agree that this can be a way to do an end run around the course-approval process (if the curriculum committee is not meeting/dysfunctional/whatever). "Hey, we've offered this three times; here are the evaluations and the grade distributions, and it meets a curricular need -- why can't we formalize what we're already doing?"

In my current work, I coordinate the credit component of an internship experience. I'm on my third time offering this opportunity, and technically the review committee needs to determine whether or not the course will be permanently included. I'm told they have bigger curricular fish to fry and to keep offering it, because a) students value it, b) it fits with the university's mission of experiential learning, and c) it's better to beg forgiveness than ask permission.
 
Sometimes the dress-rehearsal use of pilots is due to a top-down mandate. In my experience at a CC, many reform initiatives are decided at the state level; individual colleges run pilots to determine the best way to implement the new policies, procedures, and curricula. Trust me -- many faculty would love to use pilots for their original purpose: determining whether a new direction is worth pursuing at all.
 
I wonder whether part of the difference in what is meant by "pilot" comes from differences in background. In chemical engineering, new processes are first run at "pilot scale" in a "pilot plant" to uncover issues that will need to be solved before full scale-up for manufacturing. Someone "piloting" a course with the intent that it *will* be scaled up may be alluding to this meaning.

As for the original question -- how many times should a course run before a decision is made? -- the informal practice here has been three. If a class hasn't found its niche after three tries, it's probably not meant to be, at least in that form.
 
I agree with Anon 8:02 -- three tries are enough to gather data about effectiveness. Perhaps you could use different terminology to distinguish between the two types of pilots you see: an "experimental course" would be a new "let's try it and see" sort of course, while a "pilot" would be a dress rehearsal for scale-up.
 
Where I used to be, we often introduced new courses with an X course number (X201, for example). X meant "experimental," and X courses could not be required for a major (or, as I recall, used to fulfill gen ed requirements). A course could be offered for only two years (fall, spring, summer, rinse, repeat), so a maximum of six offerings (which could be more than six sections). Then it had to go through the formal curriculum approval process for real -- courses could get X approval, and sometimes the curriculum committee would suggest that instead of final approval.

What we had were programs which kept introducing more and more courses that they had no way of staffing on a regular basis, and no way of offering more often than maybe once every three years. (Some of us serving on the curriculum committee suggested that they not do this, but...)
 
Here at Proprietary Art School, the curriculum is decided by top-level corporate management, with very little faculty input. We are currently going through the introduction of an online introductory remedial mathematics curriculum: students do their homework and take mastery exams, all online, with no in-class instruction. The curriculum uses a Pearson online math software package and has been imposed on all branches of our school by top management.

The basic idea behind the new math curriculum seems to have been to save money, with top management believing that replacing old-fashioned bricks-and-mortar classrooms with an entirely online curriculum will be a magic bullet: it will save a whole boatload of money while dramatically increasing student enrollment. I suppose the underlying plan is to replace expensive full-time faculty members with poorly paid part-timers who are little more than facilitators, wandering back and forth in the classroom and looking at students huddled in front of computer terminals. Perhaps the ultimate goal is to move the school’s entire educational program offshore, where the facilitators will be sitting in front of computer terminals in places like Bangalore.

Most of the students seem to really hate the new online math curriculum. The pass rate is rather low: relatively few students have been able to complete the sequence, and most drop the course before finishing. To succeed in an online, self-paced, mastery-type program, a student needs to be mature and well motivated, must schedule their time well, and must be able to avoid procrastination. Very few students needing remedial math fall into this category, and the dropout rate from programs of this nature tends to be very high. The money supposedly saved by moving the curriculum online will probably be eaten up by the increased cost of computers, the cost of the software, and the tuition lost from so many students failing or dropping out altogether. Most of the faculty “teaching” the online math course bitterly resent being reduced to the role of facilitators, and wonder why they are in the classroom in the first place.

But this online math curriculum isn’t really a pilot program in the sense Matt describes. It is already set in stone by upper management, and no one up there will dare admit that the whole thing is an unmitigated failure. It will continue, no matter how many faculty members are dissatisfied with it. I suspect some top executive’s career would be ruined by admitting that the whole curriculum was a bad idea in the first place, so we are stuck with it. There is even talk of moving our introductory English sequence online, and Pearson is already advertising a purely online physics curriculum. So in the not-so-distant future, only students at top-level SLACs or R1 universities will have the luxury of being taught in a bricks-and-mortar classroom by a real live instructor.

 
I think we have three distinctly different kinds of pilots.

The actual "pilot" could be special sections of an existing course taught in an entirely new, perhaps radically new, way. The first year is a true pilot, while the second year pilots how you will scale the course up to other full-time and part-time faculty, if the data support the change.

Another kind has a standard course number, but is taught as an elective while gathering data on student interest and outcomes to see if it warrants becoming (say) a gen ed class. There is no limit on how many times such a class can run, but it usually gets killed if it doesn't draw an audience as a gen ed class.

Finally, we have special topics classes that are like the X classes mentioned above. They cannot be run more than twice without getting a regular course number after full curricular review.
 