Life in academia

Evaluations are in. Was it worth the effort?

To teach or not to teach, that is the question. This is about numbers, in some way. We went beyond that.

The spring semester is just two weeks away, and we are already preparing the next courses. Just today, however, I received the student evaluations from the marketing research course I taught last fall. It was the one in which we ran the Instagram experiment and later also experimented with peer-grading.

My co-teacher and I had already taught the same course last spring. It was then moved in the curriculum, so we ended up teaching it back-to-back over two semesters. In our first attempt, the course didn’t score badly, but there was room for improvement. The qualitative feedback indicated that students had difficulty finding their way through the group project. They also felt overwhelmed by the level we required them to master. From our perspective, that had a lot to do with expectation management and communication on our side.

So, how did it work out in our second run?

The school changed how it reports evaluation outcomes in the meantime. Until last academic year, we worked with averages. As of this academic year, we focus on medians. Since evaluation distributions usually cluster around the positive end of the scale, this effectively reduces the impact of singular negative evaluations. It also reduces differences between courses, because a median on a discrete scale from 1 to 10 will never be 7.53, for instance, but only 7 or 8. For courses as a whole, the new reports also include the median value from the last academic year. For individual teachers, they don’t. Hence, I compare my personal teaching performance based on averages, while I use medians for the course.
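The effect of that switch is easy to demonstrate with a toy example (the ratings below are made up for illustration, not our actual data):

```python
# Hypothetical ratings on a discrete 1-10 scale, with one unhappy outlier.
from statistics import mean, median

ratings = [8, 8, 9, 9, 10, 10, 8, 9, 9, 2]

print(round(mean(ratings), 2))  # 8.2 -- the single 2 drags the average down
print(median(ratings))          # 9.0 -- the median barely registers the outlier

# Remove the outlier: the mean recovers noticeably, the median not at all.
print(round(mean(ratings[:-1]), 2))  # 8.89
print(median(ratings[:-1]))          # 9
```

The same mechanism compresses differences between courses: means spread out over the whole decimal range, while medians snap to the few grid values where most distributions peak.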

We made it difficult by making it easy

We had understood that last year we had perhaps wanted a little too much, so we definitely scaled back. One difficulty last year, for instance, had been for the students to define their own research topics. This time, we gave them clear topics to help them over this initial hurdle. We also taught the methodological material in a much less mathematical way, and we made sure they wouldn’t get lost in statistical software they had no clue about. In other words: this year, we assumed they would have no prior knowledge.

Here’s one question from the evaluation survey.

Q. Your prior knowledge was sufficient for understanding the topics covered in the course and included in the exam program.

Last academic year, we reached 8/10 (median). This time, we scored 6/10. This was our single worst item and also the largest loss in comparison to last time.

I have an idea what went wrong here. As the students approached the end of their group project, they were supposed to analyze their data with a simple ANOVA. Most of them, that is. However, this was about the only method that we did not explicitly cover in the lessons. I tried to fix this by providing easy-to-understand tutorials, but it remains an omission. As a matter of fact, students expressed that we need to better connect the two elements of the course (the group project on survey research and the sessions on quantitative analysis).

The other driver of that negative evaluation will have been the exam preparation. Last academic year, we used the excuse that it was our first time running the course to not give out much information. From our perspective, full sample exams were not necessary to prepare for the exam; we might be wrong on that one. This time, we didn’t have that excuse.

Q. The exam rules and procedures have been explained clearly

We went from 8/10 to 7/10 on this one.

Fine. But there is also this one.

Q. The workload required for this course is appropriate compared to the number of credits allocated to it.

On this one, we improved from 7/10 to 8/10, in line with our intentions. We can build on that improvement. We can also be satisfied with the following two items.

Q. Overall I am satisfied with the course attended

Q. On the whole I am satisfied about the way the course has been carried out.

On both, we went from 7/10 to 8/10, and we closed in on the school median. Note that marketing research, like any methodological or statistical course, is typically a course that students are not very excited to follow. (Students had to indicate their interest in the course, and it lies at 7.0/10 compared to the school’s median across all courses of 8.4/10.) In particular, we usually have large numbers of exchange students who take this course because it’s one of the few they can convert at home. We are very satisfied to receive high overall evaluations for a course that students don’t really care about before taking it.

No negative opinions anymore, but slightly less stellar ones as well

The survey also includes five items in which students assess each teacher separately. I’m not going to report my colleague’s scores, but I can disclose my own. Let me first list the five questions with my average values from last academic year and this one.

Q.  The teacher stimulates interest in the course subject.
(Now: 8.90 | Past: 9.05)

Q. The teacher explains the topics in a clear way.
(Now: 8.55 | Past: 9.00)

Q. The teacher is available for clarifications and explanations.
(Now: 9.07 | Past: 9.32)

Q. The teaching activities are well organized.
(Now: 8.73 | Past: 9.20)

Q. The teacher encourages the student’s active involvement.
(Now: 9.20 | Past: 8.95)

I went down on all of them except the last one, though I remain at a high level. I took a deeper look at the answers to these questions. It turns out that there is no longer a single item on which any student gave me less than 6/10. That in itself is unusual, and it means that all students are now generally positive. However, many votes that used to be 10/10 are now 9/10.

So, I personally might have been a bit worse, but then again: the course moved from spring to fall. Evaluations are always relative to their context, and we now face different competition. I also messed up my final session, i.e. the exam review, and unintentionally caused confusion about two topics in it. In our teaching setup, this is the last session of the entire course; all my other sessions come at the very beginning. Therefore, some recency effect might have been at play as well.

We also had access to qualitative evaluations, in which students provided free-text answers. The topics they wrote about have become much more focused. Two issues remain: (1) Students felt they did not have enough prior knowledge; we need to provide more introduction and basics. (2) The main elements of the course work well independently, but students need to see more connection between them.

For the first issue, we will add a disclaimer to the syllabus and warn students ahead of the course, and then try to identify which sessions really challenged them most. We still want the course to be challenging, so we will scale back the level of difficulty only so much. Beyond that, we are considering adjusting the sequence of sessions, so that students see both teachers more evenly distributed over the semester. It’s a risk, however, since this changes the storyline of the course. We might need to drop the group project and replace it with a series of assignments. In another marketing research course that I’m teaching in the upcoming semester, we are going to do exactly that.

Was it worth the effort?

To teach or not to teach, that is the question. At face value, our improvements were rather small. There was a bit of up and a bit of down. A simple repetition of our original course might not have turned out much differently. On the other hand, we introduced some novelties that we had no prior experience with. (Peer-grading, notably.) They maybe didn’t help much with the evaluations, but I do see how they helped students improve themselves. We’ve learned to use another teaching tool: yes, that was certainly worth the effort.

But this is about numbers, in some way. Our evaluations across items have converged toward the median: we eliminated negative outliers, but also lost a share of super-happy students. I personally think it’s more important to fix a course at the lower bound first than at the upper bound. We want to design courses that provide value for all our students. We have achieved that.

And at least for one student (and actually more than one), we also went beyond that.

I love this course! Both teachers are amazing, you can tell they prepare their class, that they know a lot about the topic and they simplify it very good. Both are so enthusiastic about the topics that I come very happy to class. All topics are very interesting and I love how they are presented during the semester. I like that there  are practical exercises. Overall, I absolutely loved this course even though statistics are hard for me. The positive attitude the teachers have and how you can tell they actually care for us and how we learn and not only about the exam, makes me really want to learn and not worry. I think I will look for job opportunities in this area!
Thank you so much

