Tuesday, October 27, 2015
Why Good Student Course Evaluations Are So Hard to Find
Wise and worldly readers, have you seen a particularly good version of student course evaluations? Is there a reasonably elegant way to serve so many disparate purposes at once?
I do know of an institution where students who complete their course evaluations by the last day of classes get a privilege that makes it worth the effort: access to a database where they can search composite data from previous evaluations by course and professor, early enough before the next semester to do a little rearranging of their schedules based on the results (within limits, of course, because sections in prime time and with popular professors are already full).
We had a really good system, but replaced it with an on-line system that was often worthless. (Some mediocre faculty were never evaluated because response rates were too low to be statistically meaningful, which was easy to achieve if the prof didn't push the survey in some way, since the only notice students got was yet another spam-like e-mail from the college.) The only flaw in our earlier system was that the scantron forms did not provide an option to write comments on the back, so they rarely got any comments unless you supplied your own blank paper. More expensive forms with a directive to put comments on the back (at a different institution) generated a lot of feedback. Another advantage of the in-class approach: the response rate told the boss how many people were present on the day the evaluation was done. That alone would be a course evaluation.
Regarding your point about identifying problems, you could look for effusive praise of a certain type on Rate My Professor as a warning sign. I have no idea whether our regular student evaluations provided any evidence of a huge (firing-offense) underlying problem, because I have never been in a position to see those ratings, but I doubt it. No survey I've ever seen asks the kinds of questions they have on RMP. What showed up on RMP was a really high score for "easiness," comments that students were told exactly what was going to be on the exam so you didn't even have to attend class, and, most tellingly, that the class regularly got out a half-hour early.
If a professor wants feedback on how the class is going and what the students perceive as serving their learning or not, it's way better to ask that in, say, week 4 or 5. Do a simple "stop-start-continue" questionnaire. Or, have your friendly local teaching & learning center consultant come to observe and interview the students. Then, TELL the students what you will and won't do, and why.
For the summative purpose, if I were king, I would make an iron-clad rule that the forms have to be created in accordance with best survey-authoring practices for reliability and as much validity as possible, AND that the data have to be analyzed and interpreted in statistically defensible ways!
Beyond that, it would be nice to see patterns acted upon -- professors who have a pattern of excellent student responses get asked to share their practices; professors who have a pattern of problems get held accountable for changing.
If I -- were King -- of the Foreeeeeeesssst!
Consequently, in order to avoid negative reviews, faculty members face pressure to try to "game" the system: easing up on standards, giving simple or easy assignments, handing out lots of good grades, and not challenging their students too much. Some faculty members even resort to handing out free pizza to their students at course evaluation time, hoping to drive up their scores.
Ideally, course evaluations should be for the benefit of the individual faculty member—they should be used to determine if what they are doing in the classroom really works well, or if something needs to be fixed or done differently. They should not be used by the administration as tools to reward or punish faculty—if they are used in this manner, this can lead to all sorts of perverse incentives.
"Length and difficulty of homework assignments were: Excellent / Very Good / Good / Fair / Poor / Very Poor."
So if the students choose Very Poor, am I giving too little homework? Too much? Is it too easy? Too hard?
You can guess, but you can't be sure.
Badly designed assessments will get less-than-useful results.
And I agree with all the comments about moving the CTEs from in-class to on-line. We tried that too, but the response rate was less than 10% across all courses. So we went back to in-class.