Three weeks of blog silence are over. The semester kicked off, and I was pretty busy. Well, I am pretty busy. Hence, I didn’t manage to write anything further about the upcoming elections in Italy (nothing has really changed), and I haven’t yet managed to draft a post on my cross-country skiing explorations in Südtirol. Since it’s already February, maybe I’ll keep those for next season, but I will update my opinion on the GPS accuracy of the Garmin FR 935 as compared to a smartphone. All of that is a distraction from what I really want to write about this time around: my students.
The context: Students won’t read proactively by themselves
Last fall, I announced that I would finally ditch the textbook in one of my marketing research classes. Instead, I wanted to provide students with a curated list of readings. They are third-year bachelor students; the majority are on exchange from the United States. I faced two challenges: First, am I selecting good readings? And second, do students actually read?
The second problem is not idiosyncratic to textbook-free classes. I mean, I’m only 33, and I can still remember my time as a student. I did prepare readings before lectures, and I felt I benefited from it. I know that most of my peers did not. The majority read the materials after the lectures, and a large share attempted some binge reading as the exams drew closer. If I have to expect that students read materials only after the lectures, then my classes have to be much more basic: I’m their first exposure to new concepts. I can’t discuss them in much depth; I have to explain them. Essentially, I end up narrating the book. This is about the least satisfying way of teaching.
My preferred way of teaching is an open conversation. I guess that in order to get there, I’d have to ditch not only the textbook but also the slides. I haven’t done so yet. At least I’ve included some tasty slides.
At any rate, I want to move to a scenario where I can challenge students with questions that allow them to critically reflect on the basic concepts. I want them to get a sense of the concepts. I want to be able to show them things beyond definitions and explanations, because the basic understanding from their readings allows them to follow me onto uncharted terrain.
And, sure, highly motivated students will read before lectures even without any incentive. At least most of the time. Or as long as it’s relevant to them. Fine. Maybe I don’t need to care. Yet we do care about less motivated students, too, no? I’d be happy to infuse them with just a little more motivation to learn about what I have on the agenda.
One solution: nudge them (somewhat)
A nudge is a behavioral intervention that does not change the incentive structure: students should not get a direct reward for behaving in a superior way. In that sense, the solution I implemented is not a nudge at the beginning of the semester, but it becomes one in the second half. Let me explain.
At Bocconi, students can be attending or non-attending. In my course, attending students have to submit at least four and up to eight assignments. (The semester has ten weeks.) The best four assignments determine their grade. However, in order to qualify as attending students, they also need to give me feedback on their weekly readings at least four times. Any additional feedback they give me beyond that is voluntary. While their feedback is never graded, the first four rounds of feedback are not a nudge, because they are a necessary requirement to qualify as an attending student. The reminder that they may continue to give feedback is a nudge – one based on the idea that they will keep giving feedback once I’ve conditioned them to do so.
Their feedback is very simple. They have to respond to only two questions. My minimum requirement is for them to type at least 50 words. This paragraph has about 50 words. It’s really not that much. At the very least, it prevents students from writing “I liked reading x.” and nothing more.
Question 1: What did you find most interesting about the readings?
Question 2: What did you find least clear about the readings?
This feedback is an adjustment of more conventional reading quizzes. In my other course, I use a textbook, and I ask students five simple questions before each lecture to make sure they have at least browsed through the book. That worked, too (for me). I know from last year that even then, some students kept answering quizzes beyond the minimum number they had to submit. I could have used the same method here. They would read.
I wanted more. My hope was that the feedback survey would make them think about the readings. And, to some extent, enjoy reading.
Does it work?
I’m very happy with the outcome. For instance, I feel that attendance rates have gone up compared with previous years. I’m supposed to have 34 students. Around 25 come to class. Some students have a scheduling overlap with our second weekly session. If they come, they can only catch 50 minutes of input – instead of 150 minutes. At 6 in the evening. They come.
There might be other explanations for that. However, I do have their actual feedback, and I massively enjoy reading it. The following quotes are feedback from the second week, whose readings covered focus groups and the coding of qualitative data. This is only a small selection; all the feedback I don’t include looks very similar. I formatted some key sentences in bold. They underline how deeply my students engaged with the readings.
On question 1: What I found most interesting in these readings is the clear and complete scenario that they give about focus groups. Beginning with the story of how this method has born, the simplified but accurate explanation on how to be prepared to them and, of course, also a digression on which are the main critics, I think I now have a wide view on how they work, even if just theoretical.
On question 2: What I found least clear about the reading was that even when there are so many ways of coding and analyzing the data so that you get the best result out of a focus group investigation, it is still very risky to follow this results and making decisions based on them when they are not very precise. I understood how important it is to do more that one focus group for getting more convincingly results, however I still think the limitations this method of research has could make you make the wrong decision, thus it is not clear to me how is it that so many companies still use this method for customer analysis.
On question 2: Obviously, Anderson’s piece both surprised me and challenged me, so understanding his point of view is definitely something I plan to bring up in class. In addition to that, I felt that Saldana’s piece explained coding very well, but it didn’t give enough insight into how much coding is sufficient to make a claim. When I interned at a marketing firm, I helped with sentiment coding for GlaxoSmithKline, and it took 6 of us 4 and a half weeks to get through all of the content – thousands of tweets, articles, and really anything that came up on the internet and was pulled by our coding software, Quid. While the “Preparing for focus groups” YouTube stated that you needed to conduct enough focus groups to achieve “saturation” in order to make a claim, I wonder what the bar is set for in regards to coding. Secondly, on the note of focus groups, while the YouTube video explained how to plan and conduct a focus group, I felt that it left out one large part of planning: selecting your participants. While the narrator mentioned “focused diversity” in regards to the key population you interview, it doesn’t talk about how to choose those participants or recruit them. I would love to understand more about how to recruit participants, because I feel that oftentimes recruiting for a focus group can encompass both implicit and explicit bias that may skew the results of the group.
On question 1: I had always thought of focus groups as a very important and commanding source of information. I thought that it was one of the most important pieces to testing the viability of a concept in market. However after reading the assigned readings I was surprised how arbitrary and how easy it is to devalue info from focus groups. For example I was intrigued by all the recommendations that the video how to prepare for focus group gave and can see how leading a group poorly could increases observer bias and discredit the information. I’m very excited to learn more about this topic!
On question 1: I thought that the Fessler 2017 article was the best article because I did now knot about this – that you can transcribe a interview for example straight to Google Docs instead of working hours doing it – just genius, and something I am going to try out! I thought the Anderson 2014 article was also very interesting and had some fun examples and the author had a big opinion about the matter which was fun to read. I never thought that focus groups could have such a negative impact on films and television programs, but it makes sense now after having read the article.
On question 1: I found Anderson’s piece “F**k Focus Groups” the most interesting and surprising of this week’s readings, especially as it was the first source on Blackboard regarding focus groups. I understand his point that focus groups won’t work for kids’ products or tv shows all the time, but it surprised me that he went so far as to say that “Using a focus group to help plan a creative project is disastrous” and that “Focus groups don’t help make a winning product.” I believe that these claims are an overstatement of the problem of focus groups, because in my experience conducting focus groups for Coca Cola while working at an American marketing firm two summers ago, we were able to take qualitative data from focus groups and put it into quantifiable data points for executives to get a picture of the products. Saldana’s article got at this point when she talked about coding, which gave me a sort of biased view when reading Anderson’s piece. In the same vein, the fact that focus groups came from World War II propaganda research was extremely surprising, as I can’t imagine advertising or marketing before focus groups. That being said, it surprises me that Anderson can make such a claim that focus groups aren’t useful, because I feel that they are one of the best ways to get real opinions on a product or idea. It sounds nerdy, but I would love to talk with the author and ask him what he suggests in lieu of focus groups, as he spends his articles saying how stifling they are to success, but he doesn’t suggest another way to extract qualitative data from the target audience.
It’s amazing. On average, students wrote 96 words per question. That’s almost twice as much as I demanded. A few answers were shorter than 50 words; the shortest was 25 words, the longest 287.
At this point, I almost regret that there is no actual grade for them to earn. Some of these answers are better than what I usually see in exams.
Their feedback on question 2 (what they found least clear) helps me select better readings for the upcoming sessions, as I can figure out what they struggle with. To that end, the feedback surveys are an amazing addition to my teaching. Remember the two problems I mentioned earlier? I can actually address both of them with the very same, very simple tool.
What does not work?
My lectures take place on Mondays and Tuesdays. Students have to submit their feedback by Sunday evening. I don’t have a chance to incorporate their questions immediately into Monday’s lectures. This is a huge pity. I’m afraid they might feel that we don’t really care about their input. But we do.
Some students include very interesting and relevant questions. The timing doesn’t make it easy to respond to those either. On Sunday at midnight, I can’t really focus on them anymore. Then my agenda doesn’t allow me to look at them until after the lectures have taken place and students have moved on to the readings for the week thereafter. Since my TA doesn’t know the readings either, she can’t take this over for me. Maybe next year…
Of course, it’s early in the semester. I’m really curious how this will keep going. At the very end, I’ll post an update to this article to reassess how it worked. I’ll present some descriptive statistics on how the responses evolved, and I’ll touch upon the recurring patterns that I find both excite and over-challenge my students. So… stay tuned.
This is but one way
Of course, I’m not the one who found the holy grail. Strategies have been developed before. Some are more patronizing and based on incentives (quizzes); some are more lenient and based on better teaching. Some require more effort, some less. John Warner points out that using readings in class is key to getting students to read them. Then again, the large share of irrelevant text in the textbooks I saw was a key motivator for me to ditch the textbook. He also suggests that maybe it’s not even only about students not wanting to read, but also about them not being able to read, for structural reasons in our curriculum.
So, these feedback surveys are but one way. It’s a simple and cost-efficient one. And it has one huge benefit for me: I feel more connected to all of my students, because I can value and appreciate their effort. And for that effort, I’m very thankful.