Thursday, April 24, 2014


Friday Fragments

The Girl asked earlier this week why there isn’t a metric system for time.  We have sixty seconds in a minute, sixty minutes in an hour, and twenty-four hours in a day.  That makes math unnecessarily clunky.  Why not divide the day into tenths, hundredths, and thousandths?

I didn’t have a good answer for that.


Rebecca Schuman’s latest in Slate is well worth the read.  She launches what starts out as a fairly standard attack on student evaluations of professors, but she gives it a welcome twist.  Concur in part, dissent in part.

In brief, student evaluations have been shown to be affected by factors beyond what most of us would consider teaching quality.  One that I’ve seen has been adherence to gender norms.  Male professors are rewarded for “alpha” behavior, and female professors are rewarded for “nurturing” behavior; people who cross types get punished.  And there’s always the question of grading standards.  

To her credit, though, Schuman notes that many of the obvious or popular alternatives are also untenable.  Peer evaluations are notoriously subject to internal political pressures.  Administrative observations are usually one-time snapshots, and only as good as the administrator in question.  Performance in followup courses conflates the performances of two classes, and presumes the existence of followup courses.  (Many gen ed classes are standalones.)  And just giving up altogether and assuming that everybody is practically perfect in every way fails the “basic plausibility” test.  Even if you assume that most faculty are at least pretty good -- in my observation, that’s true -- it still doesn’t follow that every single one is.

Schuman suggests requiring students to put their names on their evaluations.  (The evals would be hidden from view until grades had been submitted.)  The idea is to discourage what on the internet we call trolling.

I agree that names would probably get around some of the most inappropriate and offensive comments, and I’d personally be fine with trying it.  (Our evaluation forms are collectively bargained, so this isn’t just my call.)  But to me, the issue is less how the forms are written than how they are read.  This is where administrators need to know what to look for, whether or not the forms are signed.

Let’s say you have a scale from 1 to 7, with 7 being the best.  Most scores will cluster from about 4 to about 6.5.  Except when mandated by contract to notice, I pay those little mind (and variations in the range even less).  I look for the 1’s and 2’s.  When the same names appear in that range repeatedly, that’s a red flag.  That’s where I’m likely to dig deeper.  Yes, there may be variations of a couple tenths of a point by gender, but that doesn’t explain why Ashley has a 1.5 and Sophia has a 6 semester after semester.

Comments are harder to systematize, but there, too, a savvy reader knows what to look for.  Most are vague or irrelevant.  (In my t.a. days, a student wrote that I was the only t.a. she had whose accent she could understand.  Uh, thanks?)  A few complaints along the lines of “too hard!” or “we have other classes, too!” are to be expected.  The comments that raise eyebrows for me are along the lines of “prof misses a lot of classes” (when most students say it) or “prof takes two months to grade papers.”  Over the years, I’ve seen both.

In other words, it’s about using evaluations to spot negative outliers.  And even there, it’s only a first indicator.  After a decade-plus of administration, I can tell you with some confidence that students’ collective identification of outliers tends to be pretty accurate.  Variations in the middle should be ignored.

If the only evaluations you ever see are your own, it’s easy to overestimate the impact of any given comment or score.  But when you see the vast sweep of them, you realize that most of the randomness comes out in the wash.  So sure, go ahead and lobby for names on them.  I think you’ll be surprised at how little difference it actually makes.


Sometimes the right metaphor is staring you in the face.

Last night I had to pick up The Boy from baseball practice, which ran weirdly long.  While waiting,  I combed through highlights of the day’s Twitter feed, and came across Libby Nelson’s tweet asking why colleges still have print newspapers.  Print journalism is not exactly a thriving industry.  I had responded with a similar question about college radio stations.  I absolutely loved my time at my college station, but in 2014 it’s difficult to maintain with a straight face that radio is a growth industry.

Then I looked up and saw The Boy and his friends practicing their fielding, and the answer came to me.

TB isn’t playing baseball to prepare for a career as a baseball player.  With my chromosomes, the chances of his coordination being good enough to hit a major league curveball are basically zero.  But that’s okay; that’s not why he plays.  He plays because it’s fun, and because it’s a great way to spend time with friends and build skills and confidence.  

Looking back on college radio, I’d make similar claims there.  I never worked in radio professionally, and have no plans to.  But college radio was great fun, I learned a lot about organizations, and it gave me practice communicating with the public.  It was a great liberal arts experience.

With newspapers, the claim is even easier.  The particular form may be dying, but the ability to corral disparate facts into a readable narrative still matters.  And newspaper staffs bond like radio staffs or baseball teams.  

I’d love to see similar activities that also have brighter futures of their own start to catch on.  But in the meantime, let’s keep those papers coming.


Fifteen years ago this week, TW and I got married.  

Looking back, we were pretty clueless, but I don’t know how we couldn’t have been.  That’s just how it works.  

Now we have an almost-teenager (next month!), a nine-year-old, and a dog.  The house is a lot fuller than that first condo was.  We’ve already outlasted my parents’ marriage.

I’m not sure how to express fifteen years in metric, so I’ll just say, I love you, honey.

Congratulations on 15 years of marriage!

You can tell TG about the French Revolutionary calendar: not only did it provide a metric day -- 10 hours of 100 minutes with 100 seconds each -- but it renamed the months, and replaced weeks with 10-day "Decades," with the 10th day as a day of rest. It might have caught on had Napoleon not abolished it after about 15 years.
Happy Anniversary! And for your information, 15 is "F" in the hexadecimal number system.

The metric system doesn't really have anything except seconds, and once you decide on that you are stuck with the structure of an 86400 second day because a "day" is tied to astronomical events. (The French system required changing the length of a second to fit their scheme.) And astrophysics doesn't even have a single "year" or "day". Check out sidereal and tropical years!
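To make that "changing the length of a second" point concrete, here's a quick sketch of mapping a standard clock time onto the French decimal day (the function name is mine, just for illustration):

```python
# The French decimal day had 10 hours x 100 minutes x 100 seconds
# = 100,000 decimal seconds, stretched over the same astronomical day
# that holds 86,400 standard seconds -- so a decimal second is a
# different length than a standard one.

def to_decimal_time(h, m, s):
    """Map a standard 24h clock time to (decimal hours, minutes, seconds)."""
    standard_seconds = h * 3600 + m * 60 + s
    fraction_of_day = standard_seconds / 86400
    decimal_seconds_total = round(fraction_of_day * 100_000)
    dh, rem = divmod(decimal_seconds_total, 10_000)
    dm, ds = divmod(rem, 100)
    return dh, dm, ds

# Noon is halfway through the day, so it lands at decimal 5:00:00.
print(to_decimal_time(12, 0, 0))  # (5, 0, 0)
print(to_decimal_time(18, 0, 0))  # (7, 50, 0)
```

Arithmetic with times of day really would be easier this way; the catch, as noted, is that every clock on earth would need a new second.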

The liberal arts education in my past forces me to tell you that all of those 60s are a relic of the Sumerian base 60 number system that became the basis for everything involving angles (which includes time) when trigonometry and astronomical navigation got invented in that region. 60 and 12 are really convenient because they are exactly divisible by so many factors -- 60 by 2, 3, 4, 5, and 6; 12 by 2, 3, 4, and 6 -- so you can do a lot of things without fractions. I don't know if the French calendar with 360 regular days and 5 (or 6) "extra" days goes back to those early ideas, but 360 is a much nicer number than 365. It might have caught on if they had had bowl games on those 5 or 6 extra days.

PS - Any serious geek should know that all time is measured from January 1, 1970 in Greenwich. (Unix time.)
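For the curious, Unix time is just a count of seconds since that 1970 epoch (in UTC, ignoring leap seconds), which the standard library makes easy to check -- a quick sketch, with the date chosen arbitrarily:

```python
# Unix time counts seconds since 1970-01-01 00:00:00 UTC
# (leap seconds are ignored by convention).
import datetime

epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
some_moment = datetime.datetime(2014, 4, 24, 12, 0, tzinfo=datetime.timezone.utc)

unix_seconds = int((some_moment - epoch).total_seconds())
print(unix_seconds)  # 1398340800
```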
I've always wanted a system that combined student evaluations with peer evaluations. My dream: a peer visits with me to talk about my course and teaching goals. Then, s/he visits my course at least twice to observe my teaching. Finally, and this is the key, that same peer guides my students through an evaluation process, helping them think about how effective I actually was in meeting the course goals, finding out whether I was actually available to students, how helpful grading was, etc. I think a big part of the evaluation process is that the students don't really know what they are evaluating. As Schuman points out, they just aren't very good at evaluating their own learning. But, being a professor, I believe these skills could be taught. And with a peer to remind them what my goals were and get their feedback on the means I used to reach those goals, we might actually end up with something that was useful to me and worth a look from the department.

But, like Schuman, I know this won't happen. The time commitment alone would be huge and no university is likely to pay for that extra load.

If we go with non-anonymous evaluations, I'd love to see them correlated with grades. I'll bet many of those outliers will correlate with low scores.
But to me, the issue is less how the forms are written than how they are read. This is where administrators need to know what to look for, whether or not the forms are signed.
How they are written does matter. You can mitigate bias to some extent through how you design and use the instrument.

And yes, you need to know what you are looking for, but that requires training. Experience is great, but bias training is vital for everyone who looks at evaluations.

Also, narratives should be used for assessment purposes by the instructor, not for evaluation.

To properly read them requires understanding the course itself. It also means knowing how to code narrative statements. It is rare that anyone can do both, since readers are typically either not content experts or not trained to code narratives properly.

Great post, great comments, another great week for your blog. Thank you!
Congratulations on the first 15 years. (My first one didn't make it that long, and the current one is not there yet.)

On evals...I'd second Anon at 5:42 AM. The structure and content of the eval matters. Where I was before retirement, we had an elaborate set of goals for our courses, but never asked about them on the evals. I kept asking about that, suggesting that we really did need to ask. For example, one of our major objectives was to enable students to work well in teams/groups. We did not ask to what extent team/group work was used in class or anything about the effectiveness of it. It can be hard to make up for stupidly developed and structured evaluation instruments.
In re: time, I always consider it a result of the fact that there are meaningful external references for time: one rotation of the earth on its axis, one revolution of the moon around the earth, one revolution of the earth around the sun. These are given, and effectively determine the split of one year into months and days. Hours are up for grabs, though; we could change those if we wanted, but it would never be as orderly as the metric system.
Good point, Anonymous@2:05PM, and seasons must be the reason the number of months is divisible by 4 even though the months are not tied to a solstice or equinox.

And we did change the hours, creating standard time to bring more order to the system. We put exactly the same number of seconds into one "day" even though the time between one noon and the next one varies significantly over the year.
I like anon at 10:46's plan, but agree with him or her that it's unlikely to be put into place. I also like Dean Dad's suggestion on how better to read evals.

I'm glad I read Dean Dad's musings on the Schuman piece before I read the actual piece. If I had read her piece before, I would have been put off by the tone, especially the implicit claim that she's a very good teacher and criticisms of her teaching must be based on something other than the quality of her teaching. She didn't actually say that, but it seems very much in between the lines, and to the extent that I'm right and not being too sensitive, the tone feeds into the narrative that academics just don't like criticism.

....which isn't to say she doesn't bring up good points, just that her tone is off-putting.
I'm baffled at the hate for college radio stations and newspapers, too, for precisely the same reasons you put forward -- is the expectation that everyone who plays on the Ultimate Frisbee team is going to get a job in the enormous professional Ultimate Frisbee league?

Or, maybe, is the idea that skills transfer between disciplines, people should try new things, etc?

People are weird. We want educated people without providing educations.

We only consider base 10 to be normal and easy because we grow up with it almost from birth and start practicing it at age 3-4. A base 12 system would actually be preferable in many respects, with 1/4, 1/3, and 1/2 all coming out as single, exact digits. "Moving the radix point" would work exactly the same as with a decimal system.
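A quick way to see that advantage (sketch only; the helper is my own, not a standard routine): expand a fraction digit by digit in a given base and watch which ones terminate.

```python
# In base 12, 1/2, 1/3, and 1/4 all terminate after a single digit,
# while 1/3 in base 10 repeats forever.
from fractions import Fraction

def frac_to_base(frac, base, max_digits=8):
    """Return the digits after the radix point of frac in the given base."""
    digits = []
    for _ in range(max_digits):
        if frac == 0:
            break
        frac *= base
        d = int(frac)       # next digit in this base
        digits.append(d)
        frac -= d           # keep the remaining fractional part
    return digits

print(frac_to_base(Fraction(1, 3), 12))  # [4]  i.e. 0.4 in base 12
print(frac_to_base(Fraction(1, 3), 10))  # [3, 3, 3, 3, 3, 3, 3, 3]
```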
I agree with CCPhysicist in that a "day" is tied to astronomical (and earth) events. NASA reckoned the day shortened with the Japanese earthquake.

I guess if you try to fit everything nicely into a day, then the next big earthquake might change the fit.

I read this book to my daughter and I really enjoyed it - "The Terrible Truth about Time."
In it there's a short history of John Harrison and the Longitude Prize.

A version of that story was dramatized in the series "Longitude,"
which was really fascinating for the science and the politics of science. But that's more for adults - sailors are shown being hanged over their reckoning of where they are.
Check out "Swatch Internet Time," an attempt to rationalize time (and time zones). Much like the French Revolutionary Calendar, it failed miserably.