Four years ago, in March 2008, I was at a JISC Experts Group meeting at The Watershed Media Centre in Bristol. There was a range of interesting sessions; one I remember well was a report from Bob Rotheram, National Teaching Fellow at Leeds Metropolitan University, on the Sounds Good project.
This session will give members the experience of receiving audio feedback on assessed work, learn about the Sounds Good project funded under the JISC Users and Innovations Programme, and consider the potential of this approach.
This was a really interesting project looking at the use of audio feedback for student assessment. With the new technologies that had become available, it was a lot easier to record feedback and, importantly, to distribute it to learners. Recording audio was something that happened a lot in universities for interviews and research, but it was usually to tape (as in cassette tape; I am sure there are a few people out there reading this, thinking to themselves, what is this “tape” you are writing about, is it “sticky tape”?). The downside of cassette tapes was that they weren’t free, and there was a logistical hassle not just in recording, but also in getting the tape to the learner. In the last ten years, we have seen portable audio recording move away from cassette tapes to mp3 recorders. It is very easy to make an audio recording, save it as mp3 and send it by e-mail. A lot of smartphones now have that capability built in, though I am sure a lot of people will use a specialised mp3 audio recorder. This change in technology made the concept of providing learners with audio feedback not just practical, but easily achievable. Bob and the team at Leeds Metropolitan were, as a result, able to undertake a detailed study of the issues and implications of providing audio feedback.
I do remember being quite taken by the idea, and when I was back in the office I went over it with a few members of staff, who went off and had a go themselves.
Bob published a final report in 2009. On student feedback it said:
Students were overwhelmingly positive about receiving audio feedback on their coursework. They frequently remarked approvingly about its personal nature and the detail provided, evidence that the lecturer had carefully considered their work. On the other hand, a small minority of students said they preferred written feedback; a few asked for both audio and written comments on their work.
The final report is well worth reading.
On Friday the 6th July, there was a Teaching and Learning conference at Plymouth University. I wasn’t there but quite a few people I follow on Twitter were.
Steve Wheeler asked the question:
What about giving audio feedback on assessment? Many students are part of a listening culture. #ptlc12— Steve Wheeler (@timbuckteeth) July 6, 2012
Pete Yeomans responded
Remembering the Sounds Good project and a paper by JISC Digital Media, I replied and posted the two links.
That JISC Digital Media paper not only referenced the Sounds Good project, but also other similar work.
Chiang, I-Chant A. (2009). Which Audio Feedback is best?: Optimising audio feedback to maximise student and staff experience. Aberystwyth University.
Bunyan, N, King, D & McGugan, S (2008). Does it make a difference? Replacing text with audio feedback. Practice and Evidence of Scholarship of Teaching and Learning in Higher Education. Vol. 3, No. 2, pp. 125–163.
Merry, S & Orsmond, P (2007) Students’ Attitudes to and Usage of Academic Feedback Provided Via Audio Files. Bioscience Education ejournal, Vol. 11.
I did then ask in the same tweet if we should…
…ignore all previous research?
What followed was a discussion on the value of that previous research. The question that was never answered, partly because I didn’t ask it, was this: having seen the outcomes of those audio feedback projects, were staff at Plymouth going to start using audio feedback, or were they going to do some more research on its effectiveness? I did feel from the responses I was getting that some people didn’t value the research and were therefore going to ignore it.
This happens all the time with research in learning technologies, and I am sure it is pretty much the main reason change takes so long to happen and various technologies take so long to be adopted.
I would go further: mainstream adoption of learning technologies is rarely the result of what has been learnt through projects and research. It just “happens” slowly, as teaching teams pick up technology from others who happen to use it, or see others using it and decide to give it a go.
I would add that most decisions about learning technologies are probably made without any regard to the research, and are taken by IT directors or management teams based on what their competitors are doing or on some article in a national newspaper. Okay, that newspaper article may be based on a piece of research, but more likely it was a PR piece from a technology company.
Think about all those technologies that are currently embedded in the institution, ignoring those used for administration: why were they adopted, and for what reason? Was the use of PowerPoint by teaching staff the result of a range of research projects? On the contrary, I would suspect very few staff have even looked at research into the use of PowerPoint, or into presentation techniques, because if they had, we would never talk about “death by PowerPoint” and we would never complain about horrible slides at learning technology conferences.
So why don’t we trust what others have said and written about the use of learning technologies? Why do we think that our own institutions and learners are so different to others? Why don’t we learn from the research of others?
The end result, more often than not, is that there is no sudden move to adopt new ways of working or embed new technologies. Think about audio feedback: four years after the Sounds Good project, we are no closer to making use of the research. We ignore the lessons learnt from the project and waste time and resources recreating or duplicating the work in our own pilots.
Are pilots just a way of playing with toys, or are they a useful tool to support the embedding of new practices and technologies to enhance learning? If you are interested in discussing this further, I am running a symposium at ALT-C 2012 called Pilot Mentality. I have also written a previous post about running pilots.