How do you keep 100 students awake on a Friday afternoon? Fast feedback and iterative adaptation seem to work

February 4, 2015

By Duncan Green

I wrote this post for the LSE’s International Development Department blog
[Image: sleeping students]

There’s a character in a Molière play who is surprised and delighted to learn that he has been speaking prose all his life without knowing it. I thought of him a couple of weeks into my new role as a part-time Professor in Practice in LSE’s International Development Department, when I realized I had been using ‘iterative adaptation’ to work out how best to keep 100+ Masters students awake and engaged for two hours, last thing on a Friday afternoon.

The module is called ‘Research Themes in International Development’, a pretty vague topic which appears to be designed to allow lecturers to bang on about their research interests. I kicked off with a discussion on the nature and dilemmas of international NGOs, as I’m just writing a paper on that, then moved on to introduce some of the big themes of a forthcoming book on ‘How Change Happens’.

As an NGO type, I am committed to all things participatory, so I ended lecture one by getting the students to vote for their preferred guest speakers (no, I’m not publishing the results). To find out how the lectures were going, I also introduced a weekly feedback form on the LSE intranet (thanks to LSE’s Lucy Pickles for sorting that out) and asked students to fill it in at the end of the session. The only incentive I could think of was to promise a satirical video (example below) if they stayed long enough to fill it in before rushing out the door – it seemed to work. Students were asked to rate presentation and content separately on a scale from ‘awful’ to ‘brilliant’, and then offer suggestions for improvements.

[Video: https://www.youtube.com/watch?v=xbqA6o8_WC0]

It’s anonymous, and not rigorous of course (self-selecting sample, disgruntled students likely to drop out in subsequent weeks, etc.), but it has been incredibly useful, especially the open-ended box for suggestions, which has been crammed full of useful content. The first week’s comments broadly asked for more participation, so week two included lots of breakout group discussions. The feedback then said, ‘we like the discussion, but all the unpacking after the groups, where you ask what people were talking about, eats up time, and anyway, we couldn’t hear half of it’, and asked for more rigour. So week three had more references to the literature and three short discussion groups with minimal feedback – it felt odd, but seemed to work.

At this point, the penny dropped – I was putting into practice some of the messages of my week two lecture on how to work in complex systems, namely fast feedback loops that enable you to experiment, fail, tweak and try again in a repeated cycle until something reasonably successful emerges through trial and error. One example of failing faster: I tried out LSE’s online polling system, but found it was both too slow (getting everyone to go online on their mobiles and then vote on a series of multiple-choice questions) and not as energising as getting people to vote analogue style (i.e. by raising their hands). The important thing is getting weekly feedback and responding to it, rather than waiting until the end of term (by which time it will be too late).

[Chart: lecture content ratings, weeks 1–3]

The form is not the only feedback system, of course. As any teacher knows, talking to a roomful of people inevitably involves pretty intense real-time feedback too – you feel the energy rise and fall, see people glazing over or getting interested, etc. What’s interesting is being able to triangulate between what I thought was happening in the room/students’ heads and what they subsequently said. There was broad agreement, but the feedback suggested their engagement was reassuringly consistent (see bar chart on content), whereas my perceptions seem to amplify it all into big peaks and troughs – what I thought was a disastrous second half of lecture two appears to have just been a bit below par for a small number of students.

The feedback also helps crystallize half-formed thoughts of your own. For example, several students complained about the disruption caused by people leaving in the middle of the lecture, something I had also found rather unnerving. So I suggested that if people did need to leave early (it’s last thing on Friday, after all), they should do so during the group discussions – much better.

What’s been striking is the look of mild alarm in the eyes of some of my LSE faculty colleagues, who warned against too much populist kowtowing to student demands. That’s certainly not how it’s felt so far. Here’s a typical comment: ‘I think that this lecture on the role of the state tried to take on too much. This is an area that we have discussed extensively. I think it would have been more useful to focus on a particular aspect, perhaps failed and conflict-affected states, since you argue that those are the future of aid.’ Not a plea for more funny videos (though there have been a few of those), but a reminder to check for duplication with other modules, and a useful guide to improving next year’s lectures.

What is also emerging, again in a pleasingly unintended way, is a sense that we are designing this course together, and the students seem to appreciate that (I refuse to use the awful word co-create. Doh.). Matt Andrews calls this process ‘Problem Driven Iterative Adaptation’ – I would love to hear from other lecturers about more ways to use fast feedback to sharpen up teaching practices.

And now, of course, it’s over to the students themselves to say what’s really been going on…

P.S. A nice distinction made by LSE’s Jean-Paul Faguet in a discussion of this post: end-of-term evaluation is best for getting feedback on content, as by then students have a picture of the whole course, and the bright sparks can feed back on what was missing. Real-time feedback is most useful for adapting the format/presentation as you go along.
