This semester, I’m teaching marketing research together with a brilliant colleague of mine. It’s a course for bachelor students in their third year. Working with him has been a great source of inspiration for me and has helped me improve my other two courses on very similar topics. Yet we tend to disagree on how we should treat students. Most recently, we discussed teaching evaluations.
Until last semester, students would evaluate the course in an online survey, completed remotely. As of this term, it remains an online survey and can still be completed remotely, but it has to be filled out in class if it is to count towards the calculations that determine the final score. In this context, I expressed my dissatisfaction with the instrument. In-class evaluations waste teaching time, and the topic of the session in which the evaluation takes place will inevitably influence the results. Remote evaluations also suffer from recency bias (and several other problems – really, so many of them), but at least less strongly.
Instant evaluations
I then went on to propose more just-in-time evaluations at various points of contact with our students – similar to those smiley terminals that you now frequently find at airport security lines, public toilets, and whatever business is cooperating with Happy or Not or one of their competitors. I find end-of-semester evaluations limited in the extent to which they tell me which specific aspects of the course to change. I get a score for how the average student on average perceived my sessions. But there are 12 of them. I have some feeling for which sessions might have worked better or worse, but then again, I also have some feeling for how students liked the course on average. If students could instead give me some quick feedback right after each session, I’d be able to figure out which sessions worked better and which worse.
My colleague fears that this would annoy students (he has a point) and also give them too much power. His opinion is that the content of the class should reflect what we as experts deem relevant; students should not have a chance to vote out unpopular topics just because they don’t like them. I agree with that. (That said, there is always more material to teach than I could possibly cover in the given time. Hence, I usually devote one session to a topic that students pick collectively.)
But this is not my point. Students are maybe not our clients. Yet we are providing them with a service. We create value for them – and with them. That 2011 paper by two co-authors and me argues that we deliver better value if we explicitly incorporate the service perspective of our value-creation partners – what they can give, what they are aware they should give, and what they want to give – whether they are on the receiving or the supplying end. I don’t want them to tell me what I should teach them. But I want to know whether HOW I teach them actually works. I have achieved nothing if they have lecture slides on all the most relevant topics but no knowledge. Teaching is only teaching when I get the message across.
Experimenting with instant feedback: group work
The logic goes both ways. This course includes some group work. Free-riding is an all-too-well-known problem, especially when groups form from members who did not know each other previously. I can leave it to the students to figure that out: my colleague’s opinion is that they are adults, and making mature decisions is part of what they should learn in groups. I would agree if group work were less common. And I agree that most of the common measures against free-riding are way too patronizing and invasive for groups that work well. But students do group work in every course. We have acknowledged that students learn better if there is more than just sessions, a book, and an exam. But, frankly, we don’t want to grade all these individual assignments (it’s not very interesting to read a text on the very same issue 100 times, or even just 30 times), and group work decreases the number of pages we have to read. On the flip side, it means that students’ grades are not necessarily reflective of their own intellectual capability, even though we assume they are when we admit them to master’s programs (or reject them) and offer them jobs (or don’t). All too often, students hate group work, and I can understand why.
So, in response to that, I’ve installed a weekly three-question survey. I ask them to rate how satisfied they are with the division of work, how satisfied they are with group leadership, and how satisfied they are with the quality of work they produce. It’s a very simple diagnostic instrument that each student can fill out. We contact groups if they score below a certain threshold for the first time, and we invite a group for an open conversation if it happens a second time. Participation was high at the beginning and has flattened out now that we are reaching the end of the semester, simply because most groups work well. There’s one group that I had to call in for a meeting.
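For illustration, the escalation rule is simple enough to sketch in a few lines of Python. The 1–5 scale, the threshold, and the data layout below are my assumptions for the sake of the sketch, not the actual setup:

```python
# A minimal sketch of the escalation rule described above, assuming a
# 1-5 satisfaction scale and a threshold of 3; both are illustrative.

THRESHOLD = 3.0

def weekly_check(group_scores: dict[str, list[float]],
                 flagged_before: set[str]) -> dict[str, str]:
    """Map each group to this week's follow-up action.

    group_scores: group id -> the week's average answers to the three
        questions (division of work, leadership, quality of work).
    flagged_before: groups that already scored below the threshold once;
        updated in place as new groups get flagged.
    """
    actions = {}
    for group, scores in group_scores.items():
        if sum(scores) / len(scores) >= THRESHOLD:
            continue  # group seems fine this week
        if group in flagged_before:
            actions[group] = "invite for an open conversation"
        else:
            actions[group] = "contact the group"
            flagged_before.add(group)
    return actions
```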
Of course, the survey is redundant in the sense that students could also contact me on their own and just let me know. Unfortunately, in my experience, they rarely do until it’s too late. I don’t need to care: student-centered learning places the responsibility for learning on the students. But student-centered learning also envisions that the teacher provides a stimulating and empowering environment. My intention is to gently nudge them with this instrument, in two ways: (1) have them appreciate that, next to the division of work, group leadership is an important aspect; (2) give them a weekly reminder that they can contact me if they need to.
Ultimately, my thinking about instant evaluations is closely related to that. I want to stay on top of the game. It comes with a risk: if students give me negative feedback that I’m not able to act upon, I might actually bring down my final evaluation score. But again, that’s not the point. I don’t envision a weekly evaluation of my rhetoric, or my slides, or how engaging students find my interaction with them. I want to understand whether the specific class works. Others do this with quizzes that test student knowledge, but then it’s just another instrument to evaluate students. I want a diagnostic.
My next application: weekly feedback in a pass/fail course
Here’s a survey I’m going to test in one class during the next semester. It’s the only class I’m teaching all by myself. Typically, it gets good evaluations from those who come, but class attendance tends to drop a bit over the course of the semester. The course is pass/fail only, and students pass when they submit and pass at least four (out of six) very simple group assignments. The Instagram experiment is one of them. The first week is an introduction, and the final week is an exam review. For the ten sessions in between, I’ll use a diagnostic instrument like this (afterwards, I’ll briefly explain why):
(1) How helpful were the readings in preparing for this week’s learning?
(2) How do you perceive the difficulty of this week’s lecture?
(3) How much did the group assignment support your learning this week?
(4) Overall, how much time did you spend this week on this course (excluding the session)?
Imagine the typical course. There are three blocks of learning. Students read. They come to class. They prepare assignments. The first three questions reflect this narrative.
It’s the first time that I’m going to ditch the textbook in this course. I’m tired of it. As a result, there’s lots of new material to use, and while I have a clear motivation for each bit of reading that students will get, I’m not sure I’m getting it right every week. Now, imagine that the readings score low. I would not know which exact reading causes the issue, but I could investigate that with a quick follow-up. I can then decide whether I want to replace the reading (if there’s a better alternative) or help students make better use of it. They might respond to that first question while filling out a very simple reading quiz that they need to pass in order to qualify as attending students. (At Bocconi, we distinguish between attending and non-attending students. The latter typically get more difficult exams.)
With the second question, I aim to capture the effectiveness of the class. There are dozens of ways to ask this question, and I might change the phrasing. At the moment, I favor asking about difficulty because it matches conceptually with the previous question on the readings. Also, I don’t want to run the risk of making things too easy and thereby less challenging. For this question, I’m searching for a technical solution. A happy-or-not terminal would be ideal. There might be an app for that.
When it comes to assignments, I’m facing a trade-off in that class. It’s pass/fail, so students are typically less motivated than in a class where the final grade really matters. I’ve found a way to make it matter for them (they can earn points for the exam with excellent submissions), and still I cannot overload them with work. That would simply not correspond to the weight the course has in their program. However, the assignments shall not be meaningless. They still have to add value. And as some of these assignments are rather experimental, I’d rather validate my perception. Collecting this data is easy: when they submit on Blackboard, they’ll just directly add an answer. No additional annoying survey necessary.
With the fourth question, I want to give them some self-awareness, while at the same time gaining a more indirect way to assess how appropriate their workload is. Of course, self-reporting is flawed, but there’s no advantage in lying here: their survey answers won’t impact their grades. We all acknowledge that total student workload would reach excessive levels if students took everything seriously, and some of them have to work alongside their studies, so we should be wary of our own contribution to it. After all, we often know best how it feels.
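Taken together, the four questions could feed a very small weekly tally. A minimal sketch, again with assumed scales, field names, and thresholds rather than anything I have actually built:

```python
# Sketch: tally the weekly diagnostic and flag weeks worth a follow-up.
# Scales (1-5), field names, and thresholds are illustrative assumptions.

from statistics import mean

def flag_weeks(weeks: dict[int, list[dict[str, float]]]) -> list[str]:
    """weeks maps a week number to that week's responses; each response
    holds 'readings', 'difficulty', 'assignment' (1-5) and 'hours'."""
    flags = []
    for week, responses in sorted(weeks.items()):
        avg = {key: mean(r[key] for r in responses) for key in responses[0]}
        if avg["readings"] < 3:              # readings not helpful
            flags.append(f"week {week}: follow up on the readings")
        if abs(avg["difficulty"] - 3) > 1:   # far from 'about right'
            flags.append(f"week {week}: session too easy or too hard")
        if avg["assignment"] < 3:            # assignment added little
            flags.append(f"week {week}: rethink the assignment")
        # 'hours' stays descriptive: track avg["hours"] across the semester
    return flags
```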
None of this dictates content to me. All of it helps me to improve the course for the next cohort, but also to adjust the remainder of the current course to the average performance level of the students I am actually teaching. And I really don’t see how this decreases my power. If anything, it gives me more information to better exert the power I have.
Wrap-up
Student-centered learning is at the core of the Bologna reform. Student-centered learning does not mean that students have to figure out learning by themselves. It means that students become the focal point of learning activities. I cannot achieve that if I do not get feedback from them. Students are heterogeneous; I can’t live just on the averages that I receive at the end of the course.
My intention is not to collect more data points that I can brag about. I expect that over the course of the semester some feedback will be bad. But then I can act upon it – or deliberately decide not to.
Interestingly, the conversation I had with my colleague highlights that a truly student-centered mindset is not what most of us in academia carry. Most of us carry some mix of a student-centered and a teacher-centered perspective. It might be one of the reasons why the Bologna reform didn’t succeed. Most of us also carry very specific ideas about student evaluations (because of how we implement them wrongly all the time) that do not match what they should be. Similarly, we should change our perspective on what student assessment means.
And let’s not overlook how just-in-time has changed manufacturing.