Monday, July 14, 2014

 

Disruption or Incorporation?


The interwebs lost their minds a couple of weeks ago over Jill Lepore's article in The New Yorker about disruptive innovation.  Lepore argued that Clayton Christensen's formulation of "disruptive innovation" typically only worked in retrospect, and even then, it required selective reading.  Frequently, incumbents who are initially threatened by potentially disruptive innovations wind up incorporating them into their own operations.  Think of Bill Gates' famous memo about shifting attention from PC operating systems to the internet, which got Microsoft to focus more on Internet Explorer than on Windows.  More recently, it's Facebook buying Instagram.

My sense of the lesson to be learned is that incumbents who can adapt are likelier to survive and thrive than incumbents who simply refuse to acknowledge anything new.  

In that light, I read yesterday’s Chronicle piece about competency-based education a bit against the grain.  

The usual narrative about competency-based education -- and I’ve fallen into the trap myself from time to time -- goes like this.  Replace classroom instruction that involves a single professor and a uniform clock with individualized/atomized/automated online instruction and a series of tasks, and you will unleash the mighty potential of many who have been held back by an industrial-era production model.  (Alternately, the “anti” view would argue that it’s mostly an excuse to further deprofessionalize faculty in the name of cutting costs.)  

Framing competency as completely new and different raises the stakes.  Running a competency-based program requires completely rethinking how work is allocated and measured, how success is defined, and how financial aid is handled.  (Anyone who brushes off that last point as a technicality has never worked in academic administration.)  

I’m wondering if disruption is the most helpful narrative here.  What if the right narrative is inclusion?  Instead of either manning the barricades or blowing everything up and declaring year zero, what if we incorporated competency-based education into what we’re doing?

Here’s a version of what that might look like, though I’m certainly open to other versions.

What if a campus had its faculty run a series of workshops/presentations/seminars on the topics in which students would eventually have to demonstrate competencies, but decoupled the workshops from the demonstrations?  Put differently, what if we separated grading from teaching?

The idea would be that students could attend as many, or as few, of the workshops as they thought they needed.  (Obviously, some level of intensive upfront advising would be necessary to make this work.)  When they feel ready, they demonstrate their mastery of the competencies through whatever projects or exams are appropriate.  Presumably, some of those workshops could be online, some could be onsite, and some could combine the two.

A campus would become a de facto learning lab, in which faculty offer scheduled -- but probably short -- workshops or classes for those who think they might be useful.  Students could take the ones they see as relevant, even repeating as necessary.  "Satisfactory Academic Progress" for financial aid purposes could be established by setting a minimum number or percentage of competencies that have to be achieved every, say, six months.
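
As a thought experiment, here's a minimal sketch of how such a progress rule might be encoded.  The six-month window and the four-competency minimum below are invented placeholders, not anything drawn from actual financial aid regulations:

    # Hypothetical SAP check for a competency-based program. The window
    # length and minimum threshold are placeholders; real values would
    # come from whatever the aid office and regulators settle on.
    from datetime import date, timedelta

    WINDOW = timedelta(days=182)   # roughly six months
    MIN_COMPETENCIES = 4           # illustrative minimum per window

    def making_satisfactory_progress(completion_dates, today):
        """True if enough competencies were demonstrated within the window."""
        recent = [d for d in completion_dates if today - d <= WINDOW]
        return len(recent) >= MIN_COMPETENCIES

    # A student with three recent demonstrations falls short of the
    # (made-up) four-per-window bar:
    demos = [date(2014, 2, 1), date(2014, 5, 3), date(2014, 6, 20)]
    print(making_satisfactory_progress(demos, date(2014, 7, 14)))  # False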

In this model, faculty aren’t reduced to graders; they still teach.  Students seek out the most necessary and/or interesting subjects and instructors.  Online resources -- whether MOOCs or anything else -- would be made available on a guided basis as supplements.  Students who already have most of what they need in a given area could place out quickly; students who need extra help could come back again and again.  

In a sense, this model would shift the faculty role from “dispenser of rare information” to “sherpa through mountains of information.”  As such, it would come closer to acknowledging the reality of a world in which people have Google on their phones.  Institutions would still need to provide certain kinds of high-touch support, such as advising, and I imagine that co-curriculars could continue much as they already are.  But allowing/compelling students to decide for themselves how much instruction they need would both liberate the high achievers and allow students with unique learning needs to move at a pace they could actually handle.

The useful metaphor here may be the “blended” or “hybrid” course.  Courses that include both onsite and online elements tend to lead to better learning and completion outcomes than courses in either format alone, because it’s possible to get the best of both.  Could it be possible to take the best of both competency-based and traditional instruction?

Comments:
Spot on. Same policy as Western Governors University. Split teaching and grading.


 
How did you write this column without mentioning Baumol?!?

I consider myself blessed to have missed whatever flamewar arose from that article. The article's strongest point concerned data selection, not selective reading as you wrote. It gave only passing mention to applying the model to something like education (where students are not manufactured), when the model isn't even predictive in the economic arena where it originated.

I fail to see what is all that "new and different" about a concept that is 60 to 70 years old. What has changed over those years is how something like programmed learning (more than 50 years old) is implemented, and how and by whom the competency is evaluated. Credit-by-exam tests do it in one shot, many on a nationally-normed basis, while others use modules and/or final exams that are developed for a specific class.

The accreditation issue is, indeed, how success is defined. My position is that we define it the same as we do for a pre-existing class and measure it against the same set of learning outcomes. (Outcomes assessment strikes me as particularly valuable in this context, but it needs to be done in a robust rather than casual they-made-me-do-it fashion.) This is trickiest when no "regular" class exists, as I know I have written here before.

Large industrial education firms are building the data sets that can be used to vastly improve the limited heuristics behind earlier attempts at programmed learning, but competency-based evaluation is (as noted above) totally decoupled from any teaching methods that may or may not be associated with it. You don't have to take an AP class to take an AP exam.

BTW, a campus learning lab with independent evaluations is pretty much the way we are now doing much of developmental ed at my CC. The computer programs are getting pretty good and we have a totally independent assessment.

The only innovation nationally appears to be having a program that is entirely competency-driven, like at SNHU and WGU.

And I agree totally that it complicates faculty assignments and everything else you might want to imagine at a college. It definitely requires a redesigned ERP if you are to go from small-scale hand-managed classes to a full program of this type. It also requires serious thought about financial aid. Do you get supported to take a year to learn one semester of history while someone else gets only two weeks of rent covered when doing two semesters in that period of time, or will you get 4 years of living expenses while finishing your degree in one year?
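
To put toy numbers on that aid question (every figure below is invented purely for illustration), compare a payout rule keyed to calendar time with one keyed to competencies earned:

    # Two hypothetical aid payout rules; all dollar amounts and paces
    # below are made up for illustration only.
    MONTHLY_LIVING = 1000   # stipend per month enrolled (time-based rule)
    PER_COMPETENCY = 750    # payout per competency earned (output-based rule)

    def aid_by_time(months_enrolled):
        return MONTHLY_LIVING * months_enrolled

    def aid_by_output(competencies_earned):
        return PER_COMPETENCY * competencies_earned

    # Slow student: 12 months for one semester's worth (say, 5 competencies).
    # Fast student: 4 months for two semesters' worth (say, 10 competencies).
    print(aid_by_time(12), aid_by_time(4))      # 12000 4000
    print(aid_by_output(5), aid_by_output(10))  # 3750 7500

Under the time-based rule the slow student collects three times as much for half the learning; under the output-based rule the incentives flip. Neither is obviously right, which is exactly the problem.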
 
A campus would become a de facto learning lab, in which faculty offer scheduled -- but probably short -- workshops or classes for those who think they might be useful. Students could take the ones they see as relevant, even repeating as necessary.

I like the idea of this, but only if it's coupled with some sort of deadline. Dan Ariely's Predictably Irrational chapter on procrastination is relevant here: a lot of people, perhaps sometimes including the author of this comment, tend to delay until they can't.

Certainly I've observed a lot of that behavior in my own students, and the more I teach, the more I'm struck by the gap between what the most optimistic proponents of various educational alternatives expect and what I actually see on the ground.
 
Thank you for sharing your recursive review of Disruption or Incorporation...

We have a number of students who spend a semester (usually after year 3 of engineering) at a German partner university. My impression of the course system there (on the basis of advising meetings) is that it follows a similar model. Class attendance and homework are optional. The semester-long course grade is based on performance on a final exam, and it is not uncommon for students to fail a course the first time and then take it again.

Apparently this works well for the German students. Some of our students get in trouble if they aren't self-motivated.

The exam occurs only once (at the end of term, as a usual final), so their approach does not necessarily allow students to accelerate by demonstrating subject mastery early.
 
I concur with the earlier comments: this is an excellent model for learning, but it's not exactly a new one, and it works best for self-disciplined students (and those without a lot of other more-urgent calls on their time, from employers, peers, family, coaches, etc.). It's also a labor-intensive, and hence expensive, model; whether you call them professors or teaching assistants or learning coaches or academic advisors, most students will need someone paying close personal attention to their progress, and advising them as to next steps (and ways to recover when things don't go so well). Proponents of education-via-big-data will probably argue that someone lower-paid than a professor (or maybe even a computer program) can provide some of that guidance, but I'm not so sure; I still think education works best when the person or people who do the assessment also create the curriculum (or at least are very closely in touch with each other). The other problem with the cost-saving idea, of course, is that most intro./core courses are already taught by faculty members who are very cheap, for all that they're considered "professors."
 
"Students seek out the most necessary and/or interesting subjects and instructors."

Or "easiest," which would be a serious problem unless you plan to cap the number of students who can "seek out" any given instructor.
 
This method of education has much in common with standard practice in the UK, and much of Europe. Your degree is based not on continuous assessment, but on your performance in a set of tests. At least in the UK, the last 40 years or so have seen a shift in practice, to degrees that combine some continuous assessment and some exam work. I can see the advantage of having outsiders grading the work -- but there are also problems. You pick up things with continuous assessment that are missed by the examination system. And especially in the humanities, it requires a shared syllabus and set of expectations for each course, which would be a huge culture shift.
 
Isn't this kind of what already happens in Very Large Universities? For large classes (where "large" == wherever taking attendance becomes a burden), students choose to attend or not attend class meetings, but must come to the exams.

In that sense, the class meetings are the workshops you mentioned, and the exams are the demonstration of competence.

Not that this solves cost issues -- it's just an existing (American) demonstration of the arrangement you're interested in.
 
I'm skeptical. Here's why.

It is obvious to anyone who has followed DD's other posts on competency-based education, the credit hour, self-paced learning, Baumol, etc., that his motivation behind this post is to make education more "productive", defined as getting students to learn a semester's worth of material and skills in a significantly shorter time.

But how many students are really capable of completing a 15-week course in only half that time? Those would be the students who, in the current system, whiz by with A's and A+'s without really having to study or complete homework -- and that's not many students at all. I've taught some CC classes where only two or three students have earned an A. I've also taught CC classes where there are a few A students, but many of them work really, really hard to get their A and would not be able to accelerate that same work.

Also, I wonder if those who speed through the course in a few weeks are really getting the same value as someone who had 45 hours to discuss the content with their peers and with a content expert. At one time I earned a bunch of credits through CLEP exams -- partially because I knew the material, but more because I'm a good test taker. I guarantee you I would have learned much more if I had sat through the classes on those subjects.
 
CC Bio Prof, I do not at all see DD's mission as enabling students to whiz through the material. Some students need MORE time than classes give them; self-paced learning helps them too.
 
Zora,

I agree that many students need more time than a semester to learn the material, but I am focusing here on DD's post in the context of his other posts about "productivity" and Baumol.

The typical logic is that higher education cannot become more productive as long as we measure achievement in a time unit (i.e., credit hours) or even a set number of courses passed when each course is defined by time (i.e., semester long, trimester long, etc.).

My point is that if the majority of students are not significantly increasing the speed of course completion, then we have not become more "productive".
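
A back-of-envelope version of that argument, with every number invented for illustration: measure "productivity" as credits produced per faculty contact hour, and note that it only moves if completion actually speeds up or staffing actually shrinks.

    # Toy productivity measure: credits awarded per faculty contact hour.
    # All numbers here are made up for illustration.
    def productivity(credits_awarded, faculty_contact_hours):
        return credits_awarded / faculty_contact_hours

    # Traditional section: 30 students x 3 credits over 45 contact hours.
    print(productivity(30 * 3, 45))   # 2.0

    # Competency format where only 2 of 30 students finish early, but
    # faculty still staff all 45 workshop hours: same output, same ratio.
    print(productivity(30 * 3, 45))   # still 2.0

    # Only if most students accelerated (same 90 credits from, say, 30
    # staffed hours) would the measure actually rise:
    print(productivity(30 * 3, 30))   # 3.0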
 
Zora @11:44AM

My allusion to Baumol and BioProf's longer remarks allude to a consistent theme for at least 5 of the 10 years Dean Reed has been writing this blog. Productivity. Take a look.

IMHO, competency-based credits can significantly reduce the time (and perhaps even the cost) of college for students returning to school with substantially improved life skills. That appears to be the audience targeted by SNHU. Simple CLEP tests do not have to cost as much as a regular semester class and do not require any faculty at all once the test is deemed to be aligned to the course outcomes.

I agree that many of our students need (expensive) personal interaction with a professor to fill in the gaps in their preparation. Whether the influx of GI Bill veterans and the next generation of HS grads will need more or less of this is unclear to me. Some vets who appear to need remediation are now such hard workers that they can do so in a compressed format, and Common Core has the potential to reduce the gap between HS math grad requirements and college math expectations.
 