Wednesday, February 23, 2011

 

Trust Us, We're Experts

Historiann has a fascinating, and I think largely representative, take on a provocative article in the Washington Post about “fixing” higher education. The original piece outlines eight steps that it argues would make meaningful differences for colleges and universities in the US. Some of them are easy and obvious, like toning down the focus on athletics; others are deeply problematic, like junking merit scholarships. (For my money, there’s something fundamentally wrong when having a good jump shot is a surer ticket to tuition than building a strong record at chemistry or writing.)

The first one is somewhere in between. It’s “measure student learning.” Historiann dismisses this one out of hand, with a quick reference to No Child Left Behind and the following: “Let’s just strangle this one in its crib unless and until we get some evidence that more testing = more education.”

It’s a fascinating response, because it encapsulates so cleanly the unthought impulse that many of us have. Testing equals Republicans equals bullshit; now shut the hell up and write us large checks. Trust us, we’re experts.

It’s written a little more carefully than that, of course, but written specifically to defeat verification. It rejects any sort of “measurement,” but does so by calling for “evidence” that measurement works.

What would that evidence look like? Might it involve, say, measurement? If not, then on what basis could you use a term like “more”? Every meaning of “more” that I can fathom involves some sort of comparative measurement. But to do that, we’d have to agree on a measure. Unless, of course, that was simply a rhetorical flourish, a semi-ironic acknowledgement that such a thing could never be proven because, well, it just couldn’t.

The knee-jerk response to any sort of accountability rests on a tautology. We know better than anyone else because we’re experts; we’re experts because we know better than anyone else. Screw measurement, accountability, or assessment; we already know we’re the best. Just ask us! Now, about that check...

If the folks who care about higher education are even halfway serious about avoiding the traps K-12 is in, the first step is not repeating the same mistakes. “Trust us, we’re experts” simply is not a persuasive argument to the larger public. It may once have been, but it isn’t now, and it hasn’t been for a long time. The difference between Historiann’s perspective and my own is that she seems to assume that failure to defer to rank is the public’s shortcoming; I think it’s basically healthy.

Part of the reason that Academically Adrift has resonated as much as it has, I suspect, is that it argues something that most of us (and most of the taxpaying public) secretly know to be true: many college students skate through without getting appreciably smarter. I consider that a major problem, and one that would require some pretty fundamental structural changes to higher education to address.

Oddly, many of the same people who share Historiann’s dismissal of testing are among the first to decry poor student performance. We expert educators are expert educators, if we don’t mind saying so; therefore, any student failings must...wait for it...be the fault of the students! In fact, they’re getting worse all the time! Now, let’s talk about next year’s tuition increase...

After a few decades of that, the public is getting a bit, well, testy. And well it should.

At base, the popular perception that college is a scam can’t be ameliorated by assertions of expertise, truth, and virtue. If those worked, they would have worked by now. It will be ameliorated, or not, by showing the public some kind of real results. What those results should be is certainly open for debate; as a kid, I remember seeing the space program justified by the development of calculators and digital watches. It might take the form of some sort of exam, or it might take the form of success stories, or it might take the form of new graduates developing wonderful things. Which path to pursue strikes me as a fair and valid discussion. But if we don’t recognize that the basic impulse behind the testiness is essentially valid, we won’t get anywhere. Aristocratic pretensions aren’t gonna cut it; the “appeal to authority” isn’t terribly appealing. We need to show, rather than tell, the public that we’re worth supporting. Which means we need to show ourselves first. Strangling that impulse in the crib is not a serious answer.

Comments:
And yet, you have to admit that testing - especially standardized testing - gives only the vaguest indications of whether or not something has been learned. The primary thing testing shows is how well a student has evolved in the educational ecosystem to take tests. I'm not going to go as far as HA and say you should just take it on faith that instructors know best, but there has to be a middle ground. So I pose it to you (and your readers): how do we measure student learning better? With tests or without, your call, just explain how to make a better judgement.
 
K-12 is sitting on a gold mine of information, if they were just smart enough to use it.

students are tested almost every year (especially in grades 1-6), and those scores are used for bonuses for the schools and to decide whether students are ahead or behind.

if the schools would open Excel, and plug in the scores, they could easily see where there is a consistent dip in test scores. if students consistently average in the 80% passing range in grades 1-4 mathematics, but they average in the 50% in grade 5 every year, you've got yourself a bad teacher. you might have bad curriculum, but if other schools don't have that [consistent] dip in the same grade, then, bingo, it's the teacher.
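to make that concrete, here's a rough sketch in python of the comparison i mean (all the numbers, and the 20-point cutoff, are made up; a pivot table in excel gets you the same thing):

    # hypothetical average passing rates (%) by grade, one value per year
    pass_rates = {
        1: [82, 79, 84],
        2: [81, 80, 83],
        3: [78, 82, 80],
        4: [80, 79, 81],
        5: [52, 55, 49],  # the consistent dip we're looking for
        6: [77, 80, 78],
    }

    DIP = 20  # flag a grade trailing its neighbors by this many points (arbitrary cutoff)

    # average each grade across years, then compare to adjacent grades
    avg = {g: sum(scores) / len(scores) for g, scores in pass_rates.items()}
    for g in sorted(avg):
        neighbors = [avg[n] for n in (g - 1, g + 1) if n in avg]
        if neighbors and (sum(neighbors) / len(neighbors)) - avg[g] >= DIP:
            print(f"grade {g}: avg {avg[g]:.0f}% vs neighbors "
                  f"{sum(neighbors) / len(neighbors):.0f}% -- look closer")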

granted, there are off years and odd ducks, but the numbers are there, and they don't lie.

public education is going to be a losing battle, and it is only going to get worse. why? because the school system relies on students to perform, not teachers. no matter how great a group of teachers are, they don't matter if the students aren't trying & if parents aren't helping. same goes for curriculum.

teacher/school effectiveness is gauged upon student performance, and education is consistently declining in value (socially).

the big question is, if you knew everything there is to know about why students are and aren't performing, what do you do for those ones who are falling short, and just don't care to try harder?
 
For my money, there’s something fundamentally wrong when having a good jump shot is a surer ticket to tuition than building a strong record at chemistry or writing.

I agree, but if you follow the Ivy League model, athletic scholarships are also merit scholarships, and are thus disallowed. I'm not sure I am totally on board with the idea, but they do get my respect for consistency.
 
As post-secondary educators, we can determine our future or let it be determined for us. Our university is doing a pilot study for one of the national accrediting agencies, even though not all faculty are fully aware of it. And that is not because we have not been informed; it is a matter of whether people want to listen.

Just saying "no" will no longer work. Students should be learning more. Interesting "counterforce" to grade inflation. We could use this push for accountability to ensure more academic rigor in our classes, if only we are willing to lead instead of complaining and digging in our heels.
 
Thoughtful, interesting and unexpected. Thank you for sharing your perceptions.

A bursting of the higher education bubble seems to be underway. On the other side I expect to see many fewer institutions, with much higher standards and fewer graduates. Only then will having a college degree mean something again.
 
It would be foolish to get rid of Merit Scholarships, but the original author had a reasonable point: these scholarships are frequently affirmative action for the well-situated. I absolutely think we should reward high achievement, but it's difficult for a student who needs to keep a part-time job, can't afford prep classes and/or attends a poorly funded school to compete against kids at well-funded schools (public or private) whose only focus is to pad their college applications.

I know this isn't a new observation, but I found it interesting and frustrating that CC's didn't really fit into the author's equation at all. If the question is "how to fix higher education", then why the hell weren't CC's part of that solution?
 
"if students consistently average in the 80% passing range in grades 1-4 mathematics, but they average in the 50% in grade 5 every year, you've got yourself a bad teacher. you might have bad curriculum, but if other schools don't have that [consistent] dip in the same grade, then, bingo, it's the teacher."

well, not necessarily a bad teacher -- a teacher who doesn't gear their teaching to the standardized test. Not the same thing.
 
"On the other side I expect to see many fewer institutions, with much higher standards and fewer graduates. Only then will having a college degree mean something again."

Maybe it will mean you were wealthy enough to get a good K-12 education.
 
I also read Historiann's post with the comments. Something that disturbed me in both was a hint of disdain for the undergraduates as well as undergraduate courses. I've seen this in a number of other academic blogs. You can see it in many of the comments to the previous post on her blog, about the gendering of research. A number of commenters stated that their teaching added little or nothing to their research. And since research is what brings jobs, promotions, and prestige, well, there you go.

Students do need to be pushed, and it's true that they'll try to game the system and/or look for the easy way out, but my experience (teaching at R1s as well as SLACs) is that a good number of students want to learn, but also want to be treated fairly and with respect. They know very quickly if their professor thinks that they and the course are beneath him/her.
 
I'm wary about measurement programs, and I'm frustrated that almost every "testable" scheme I see for my discipline, history, relies on rote memorization and a very rigid, Whiggish concept of history. Even in the sciences, the urge in measurement is to define X, Y and Z equations or approaches as "key knowledge components" and make the test and, thus, the course, all about those specific elements.

When faculty are building courses and crafting assignments, we're often trying to challenge a status quo and keep students on the cutting edge. Even if we're not research-intensive, most academics revise and rework their teaching in light of new insights and information.

Yet I've seen post-secondary standards proposed for my discipline that try to pin down the "five key concepts" of antebellum American history, for instance. These schemes seem much more akin to what's shopped out to K-12 teachers. Here's a curriculum. Here's the textbook. Here are the assignment structures you can use. Good luck finding wiggle room there to give students a chance to really get their teeth into something personally exciting and that reflects the specific expertise you have to offer!

If I have to keep teaching to the same "key concepts" and outcomes so narrowly defined by a panel bent on a homogenized whole (or worse, by a bunch of educational experts without much reference to the discipline)? My students might as well just sign up for a random online course designed by these same experts, because they sure aren't getting the value they deserve in a university or college classroom.

I'd love for someone to promote a useful meeting ground for specialists in pedagogy and disciplines to work together. They'd have to recognize that these concepts they'd worked up weren't universally applicable. What works best for students and faculty at your CC wouldn't be a good scheme for TR's Zenith, say.

Instead, we get books like "Academically Adrift": one handy indictment of the educational system, everywhere. Worse, there's the implication that one fix will solve everything. That only works in programming on a unified platform, which North American higher ed isn't in any way, shape or form!
 
DD, I hate to say this after your gratuitous slam on Republicans, but the rest of the post hits it out of the park.
 
Even in the sciences, the urge in measurement is to define X, Y and Z equations or approaches as "key knowledge components" and make the test and, thus, the course, all about those specific elements.

Why is this wrong?

Faculty have a right to determine what the curriculum at a college should be but they also have the responsibility to make sure that what they are teaching is consistent with the broader expectations within their discipline and useful to their students.

Having taught as part of a credentialing program, I found it liberating to know that my students were performing better than average on the national exam for our subject. I know most of my teaching colleagues would have found my situation oppressive. Nevertheless, my students worked harder and (I'll go out on a limb here) learned more because they needed to pass an exam to get their credential. There was never any doubt about what I needed to teach or what they needed to learn.

testing...gives only the vaguest indications of whether or not something has been learned

Don't tell ETS! Seriously - if the cognitive objective is that the student can list three parts of a leaf - can't that be tested in a standardized exam? If I want them to know how to do a calculation or make a judgement call based on data, that can also be tested. So I'm not sure what's meant by learning here - unless you're not talking about cognitive objectives (other types of objectives can be tested other ways.)

how do we measure student learning better? My answer to this is perhaps simplistic and expensive but here goes: You define your objectives for the cognitive, psychomotor and affective domains. You test students to see if they have achieved those objectives. The end.

I will elaborate. First, you baseline everyone in all three areas to provide yourself something to compare to, using an exam and an interview with each student when they enter the major. At graduation, you measure the change in cognitive performance looking at grades and performance on a national or otherwise standardized exam. The psychomotor you test through labs or portfolios. The affective you test by looking at changes in student attitudes over time and their ability to demonstrate they have acquired the skills you've identified as important, comparing their first interview with one performed in their graduating year. The exam and interview are high stakes - they result in a notation on the transcript or a credential so the student has an incentive to do well.

You are looking for two things - change over time and the ability of the student to meet certain objective criteria at their graduation. This allows you to get credit for bringing low achieving students up to a higher standard but still holds everyone to some baseline for your discipline.
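To illustrate the two measures, here's a toy sketch of the bookkeeping (the names, scores, and the 70-point baseline are all invented for the example, not from any real instrument):

    # hypothetical entry/exit scores on a 0-100 scale for one cohort
    students = [
        {"name": "A", "entry": 45, "exit": 72},
        {"name": "B", "entry": 80, "exit": 84},
        {"name": "C", "entry": 60, "exit": 58},
    ]

    BASELINE = 70  # invented graduation cutoff for the discipline

    for s in students:
        gain = s["exit"] - s["entry"]   # change over time
        meets = s["exit"] >= BASELINE   # objective criterion at graduation
        print(f'{s["name"]}: gain {gain:+d}, meets baseline: {meets}')

Student A gets credit for a big gain and still clears the bar; student B clears the bar with little gain; student C does neither. Both numbers matter.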
 
how do we measure student learning better? My answer to this is perhaps simplistic and expensive but here goes: You define your objectives for the cognitive, psychomotor and affective domains. You test students to see if they have achieved those objectives. The end.

This only measures short-term retention and can be defeated by cramming. Knowledge that is genuinely learned is retained for the long term (so long as it is useful) and integrated with other knowledge of personal relevance to the student.

Not much of what we teach in college is useful or personally relevant to students. It should be, because it has the potential to transform their character and enrich their lives. But internalizing "academic" knowledge is a lot of work and uncool for most college students.
 
To play a sort of Devil's advocate (or something), here's a question: If you take the "show, don't tell" attitude about higher ed, what happens if showing the details of many college/university programs at all levels results in showing some rot and redundancy? Suppose that what is really needed are more well-trained carpenters and plumbers, and not BA holders in art history. This might result in a dramatic (or not so dramatic) down-scaling of institutions. Perhaps some smaller towns don't need more than one foreign languages program, and perhaps there is an overproduction of sociology majors.

Touching on some ideas from your post yesterday, employers want employees with a particular set of skills. A general education and general skills could be less helpful in the job market at a given point in time. (This might not be the case.) If society would be benefited by increasing the number of individuals with narrow, "useful" skills instead of a general set of broader skills, this is one possible result of the measurements that you'd be looking for.

Remember that there was a major expansion of post-secondary programs in the 1960s-1970s with the Baby Boomers. At that time, post-secondary education had a much different economic meaning. If one wants to talk about measurement, let's begin by comparing apples to apples, and not apples to oranges.
 
This is such a big subject (remediation alone would rate a multi-week blog series) that I will try to say as little as possible on one important topic.

DD paraphrases Historiann: "Testing equals Republicans equals bullshit; .... It’s written a little more carefully than that, ..."

Yes, so don't create a straw man. Historiann asked rhetorically "Because standardized high-stakes testing has worked so brilliantly at the K-12 level? Let’s just strangle this one in its crib ...."

And that rhetorical question is a valid one, because the article proposed using one of three measures: the "Collegiate Learning Assessment", the "Collegiate Assessment of Academic Proficiency" (ACT), and the "Proficiency Profile" (ETS). I seriously doubt if any of these will measure the ability to do calculus and apply it to problems involving Newton's Laws at nearly the same level as the "Fundamentals of Engineering" exam does for fresh graduates of engineering schools.

By the way, the existence of that last exam -- and its serious use when evaluating different engineering schools -- is proof that outcomes ARE measured. Pass rates on nursing boards, the bar exam, and the initial medical license exam are other examples.

These are, of course, not One Size Fits All. That may be why they produce valuable results.

In contrast, the single K-12 exit exams used in my state appear to have lowered the thinking (problem solving) skills of HS graduates due to a maniacal focus on one single type of exam that tests (IMHO) middle school skills.

What would that evidence look like?

See above. I think it might look a lot like what many of us are developing for our accreditors, though it is impossible to measure actual (long-term) learning within our budgets and existing graduation requirements. We'll stick to specific desired outcomes.

PS -
I don't quite understand Historiann's reaction to a core curriculum, since the history programs I know about have a core requirement in history that contains courses that would make perfectly suitable core courses for anyone.
 
To Janice:

You are right to be wary. That is why it is important for faculty to get out in front and define the objectives for their courses (or groups of courses) and assess them honestly -- independently of the grades given in the class.

That is what Kelly in Kansas was talking about (and what I was talking about in my main comment).
 
Ivory, I've seen where faculty can be strangled by narrow requirements of what to cover and how to do it, across the curriculum. It's very difficult to scale up assessment beyond institutional to regional, state/provincial and national levels and not have it tend toward the most easily measurable scope. And then, all of a sudden, we're on the hook for not agreeing on the five most important outcomes of Confederation or what key elements of geomorphology to cover at a second-year level.

Some subjects lend themselves to a certain level of "this is what you need to achieve to have completed this subject at the undergraduate level." P-chem, say, or optical mineralogy. Even then, if you have a professor who can leverage her expertise to teach things differently, it's important to leave sufficient wiggle room that she can do so without leaving students caught up in an assessment minefield.

CCPhysicist, that's true that we need to be involved. As a historian, my problem is that I run into people who alternately want to dismiss my entire subject area from the curriculum (being that I mostly teach pre-modern history and all non-North American) or reduce it to a list of facts and politically-idealized products.

I find some of these schemes particularly frustrating because there are so many different ways we can "mix" our courses, too. When I have over two thousand years of history in my syllabus (as I do in my Ancient Near Eastern survey), I can't cover everything! Even two hundred and fifty years is a daunting quagmire in its totality. I can pick and choose elements that work with interesting texts and documents, that will pique the interest of students (we're in a mining town, for instance, and I've used that to hook them on some aspects of history). Another prof, covering the same subject, might handle things wildly differently. Who am I to tell him or her what's absolutely vital for teaching this topic?
 
Ivory had the absolutely correct answer. My PhD concentration was curriculum design, and my post-hoc design dissertation concerned four variables affecting standardized tests. Of funding per student, test/text alignment, SES, and minority membership, only SES was statistically significantly associated with achievement.

My reading of the literature (as available 15 years ago) and familiarity with theories about student learning (some of them decades old and still relevant today) support the plan Ivory posits.

We know how to find out what students learn, but such testing can't be done in 3 hours with a standardized test at the end of a course of study. As Ivory states, it must also include portfolios and attitude changes.

Curriculum design professors know how to do it. Ask them.
 
As far as persuading taxpayers, you might actually be better off with individual student success stories rather than statistics. I say this because there is some social psychology data on this kind of thing. There are definitely individuals who want stats, but if you want to get people to e.g. donate to a cause, you are better off at least having the personal stories. There's a reason those 'save the children' type 4am infomercials have you save a specific child.

But that reveals a fundamental cynicism I have: I believe the public should be skeptical. I also believe the public will be credulous, and that people, on average, are not numerically oriented enough for stats to trump people stories. So you can have a college that delivers very little to the average student, and convince people it's the best thing since sliced harvard, as long as you've got an 'up by his bootstraps' American Dream Made! success story or two to throw at the wolves.
So, sadly enough, there are two questions...
1) how do we prove, to 'ourselves' and/or the 'experts', that college is valuable?
and
2) how do we convince the general public/the taxpayers that colleges are worth funding?

These questions could overlap, but there is no particular reason to assume they have the same answers.

I don't think we do well on either right now.
Frankly, after looking into the collegiate learning assessment, I'm not too impressed with how higher ed is doing on 1).

As a side note- I am heartened by the Wash Post note on CCs, in that the 'math emporium' model is probably an idea whose time has come. Not for everyone, necessarily, and it depends on useful placement tests which may not be as simple as they seem. Still, I think there's a good idea at the core there.
Though I wonder what the effect on success/completion rates in remedial courses would be if you just condensed things down further. I think making them harder could actually get you better results.
 
Dean Dad writes, "Part of the reason that Academically Adrift has resonated as much as it has, I suspect, is that it argues something that most of us (and most of the taxpaying public) secretly know to be true: many college students skate through without getting appreciably smarter."

I've said it before, and I'll say it again: All this presupposes some sort of Golden Age when all college graduates could x, y, and z.

So when were the Good Old Days, and how do we know?

--Philip
 
All this presupposes some sort of Golden Age when all college graduates could x, y, and z.

So when were the Good Old Days, and how do we know?


When college students could plausibly pay for tuition and books with in-college jobs, and graduating from college guaranteed a position in the elite.

SamChevre
 
If a student thinks that college consists of jumping through hoops and getting a piece of paper, then yes, it will be a scam. In addition, as we continue to push people who are not good fits for college into higher education, we make failure more and more likely.

One of the challenging aspects of living in the US right now is that there is so much unchallenged stupidity that it's difficult to know where to start.
 
Janice @11:49 -
That is exactly what I am thinking about, although more because I don't want to constrain my final exam too much than because the content itself varies from year to year.

My suggestion is to keep your outcomes vague enough that they encompass a range of specific topics, each spelled out with well-defined ways of assessment. What we have found energizing is that no ONE person is dictating what that list of outcomes will be. Rather, a group of colleagues who teach (or might teach) a particular course discusses what is important and how to assess whether students are getting that idea.
 
Isn't the Washington Post primarily in the education business through its rather profitable Kaplan subsidiary? Of course, Kaplan has lately been notable for its students failing the test of being able to get a job that lets them pay back their government backed loans.
 