Tuesday, June 14, 2016
Aggregating Course Evaluations
My research question aligns with what I identify as a potential barrier to these investigations: What characteristics of students are associated with completing course evaluations? For example: Does really liking or disliking the professor make a student more likely to complete the survey? Are students who earn A's more likely to complete the survey? Is higher survey completion associated with registering for the course earlier rather than later?
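To make the question concrete: if institutional research could link evaluation completion back to the student record, a minimal sketch of the analysis might look like the following (Python, with entirely hypothetical file and column names such as grade_points, days_registered_before_start, and completed_eval; our actual data aren't in this form).

# Rough sketch only: logistic regression of "did the student complete the
# course evaluation?" on a few student characteristics. The file name and
# column names are placeholders, not real data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("enrollments.csv")  # one row per student per course

model = smf.logit(
    "completed_eval ~ grade_points + days_registered_before_start",
    data=df,
).fit()
print(model.summary())  # which characteristics are associated with completing?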
As those research questions might hint, we have tremendous difficulty getting students to actually do the course evaluations. When we switched from paper to online surveys, the completion rate dropped by an enormous margin. We have also piloted course evaluations that students can complete on their phones, but I've had difficulty with these. Too many students have technological difficulties, somehow don't know their student ID number, or say they'd just rather do it at home (and then skip it altogether). It's also awkward to troubleshoot for students who ask for help logging in or want to verify that their responses were submitted when I don't know that it's ethical for me to even be standing in the room. All good research relies on good data -- so how do we get enough students, and a fairly representative sample, to complete the course evaluation?
I'll note that I do know of one creative approach. My grad school gave a big incentive to complete the surveys: before the beginning of the next semester, those students who submitted all their evaluations were given access to a database that let them look up the previous semester's evaluation results by course and instructor. The idea was that by contributing to the data, you earned the privilege of using it to your advantage in deciding which courses/sections to register for. Has anyone else seen something like this?
Note: paper forms had an 85% return rate, while our new, improved online forms have only a 15% return rate. Going to online forms also resulted in almost no written comments, which were the most useful part of the old paper forms. Going online has resulted in less data and lower-quality data, so don't expect miracles of "big data" just because the data are now numeric and searchable.
I like CC Bio Prof's first suggestion. It is terrible that Institutional Research can't correlate grade with evaluation. (Or attendance. With on-line evaluations, a student can still evaluate a course despite not attending class for weeks or even months and maybe skipping the final exam.) There is a lot to learn there.
And I like your idea of evaluating the course rather than the instructor. I already do that myself, but can't see across sections taught by others. We did get some interesting info, though, by having IR look at course completion correlated with when students registered. You do, however, have to be careful with multivariate data. Don't ask for too much until you can refine the question(s).
Although my own course feedback data isn't as rich as I would like, it (along with more anecdotal feedback from students who drop by to visit after transferring) has been extremely useful in working on the course from year to year. What is frustrating is that I know the computer systems can also look at how my specific students do after transfer, but that is not where IR can spend its limited resources (because that particular task requires the highest level of expertise). We've only seen aggregated data of what happens after transfer, but what we have seen is really interesting.
I get more written feedback now than I used to, but that was a flaw in our paper forms. Students had to use their own paper to write comments so it never looked to them like it was part of the evaluation. Or they just wanted to be done with it.
I am not sure why the "on line" response is necessary for making more Big Data questions possible to ask. Our paper evaluations have been "fill in the bubbles" for at least 15 years. Won't answers on paper evaluations populate a database (eventually...) in the same way that on-line answers populate a database more quickly?
So, student A gave no profs 5/5, student B gave all profs 5/5, etc. It's a way of norming the scores.
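If I'm reading that right, the idea is to norm each rating against the student's own rating habits before comparing professors. A toy sketch of that adjustment (my own made-up data and column names) would center each score on the student's average:

# Toy example: subtract each student's mean rating so a habitual 5/5 rater
# and a stingy rater contribute comparably to each professor's average.
import pandas as pd

ratings = pd.DataFrame({
    "student":   ["A", "A", "B", "B"],
    "professor": ["Smith", "Jones", "Smith", "Jones"],
    "score":     [3, 4, 5, 5],
})
ratings["normed"] = ratings["score"] - ratings.groupby("student")["score"].transform("mean")
print(ratings.groupby("professor")["normed"].mean())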
In principle, an on-line survey can capture who is doing the evaluation and supply that information to institutional research for detailed analysis. (You know, the way Google reads your mail.) You can never do that with paper evaluations. Doing so would require disclosure, of course, but if the college does it like Google does it, students will never know they approved that research.
I would second (third? fourth?) the observation that the number of completed evaluations drops dramatically with the shift from paper to on-line. In my current life as an adjunct, the response rate in my intro econ class in the spring of 2015 was 90% (27 out of 30); the shift to on-line occurred in fall 2015, and my spring 2016 response rate was 13% (4 out of 30).
It also makes me wonder if the tracking aspect is related to the low response rate. I suspect it isn't, given the strong love of Google email.
Our system, which is run by a private company, does know who has completed the survey. (I do not know if they tie the responses to the student, which is the critical detail for doing any further analytics.) However, students say they don't get any followup requests to do the survey, so the college doesn't use that info to improve response rates.
We have anecdotes suggesting the falloff is simply due to distractions. A few faculty got great response rates by having students do it in class (if their device was compatible with the system being used), but they also discovered that the survey took a long time to complete. We asked for, but did not get, information about the fraction of students who start it and then quit partway through. That could also be a factor.
As a whole, though, I'm not a fan. There are 40-something questions, many of which are not relevant to many courses. Even well-meaning students get discouraged answering far too many, largely irrelevant questions. This is particularly true for science majors, who take more courses (a 1 credit hour lab takes just as long to evaluate as a 3-4 credit hour lecture). As a result, it takes a huge effort to get any sort of meaningful response rate, and I've never been able to match the response rates I used to get with the 16-question-plus-comments paper form.