One issue I have found with TEL research written about in journals or presented at conferences is that most, if not all, of it is based around small cohorts of learners and rarely looks at the impact across a whole organisation.
Sometimes TEL research appears to be about doing the research itself, with little thought given to scaling up and mainstreaming the findings at a later date.
Often the research has minimal funding, which means that you are forced to work with a smaller number of learners. That lack of funding also means that, though there is existing research out there, rather than scaling up from it, the research is duplicated (again with a smaller cohort).
It needs to be noted that sometimes TEL research shouldn’t be scaled up straight away because that may have a detrimental impact on the learners.
One lesson I would pass on: if you are undertaking a small-scale pilot, reflect on how it would scale in the future. It's not just about how the technology, device or process enhances and enriches teaching and learning, but about all the other stuff too: logistics, charging, storage, training, support, staff development, sustainability and end of life.
When the iPad was released in 2010, there were lots of iPad pilots undertaken by universities and colleges.
Most of these didn't seem to take into account the challenges that a large-scale roll-out of iPads would bring. Where would the iPads be stored? How would they be charged? When the iPad was released there were no charging carts available. I once asked a charging case company to provide a case for the Nintendo DS; they couldn't, as there wasn't the demand for one and it was too expensive to test just for a single order. The end result was multiple six-way extension gangs to charge the class set; it was no wonder so few people used them.
Another challenge with iPads was how a device designed for an individual could be used by multiple people. If you had a few you could reset them individually: fine if you had ten, impossible if you had a hundred. Then there was the challenge of getting apps on to them. You could sync multiple iPads to a single iTunes account: again, fine if you had ten, not feasible if you had a few hundred! Eventually Apple released the management software to manage multiple iPads, and the licensing platform to licence apps across them.
However, those initial iPad pilots weren't always thinking in terms of that bigger picture, so they weren't able to scale effectively until those multi-iPad issues were resolved.
So if you are looking at a pilot, consider the following:
If it involves a device or a technology, what about storage: where will the devices live when not being used? How will they be charged? What will be the process for booking them? Even if you have a small number, consider the logistics as though you had enough for everyone!
If it is a new process (or web-based tool), you may be able to train the ten staff in the pilot, but what about training staff across the whole organisation? How is that going to happen, and who is going to do it? Thinking about that at the pilot stage means that if the pilot is successful, it will be easier to scale and mainstream later. Similar considerations apply to staff development, and to support when things don't go as planned.
There are also considerations of sustainability: whose budget will those costs be placed in after the pilot? Are they on board?
What about end of life and equipment replacement? Where will the funding for all that come from?
Small scale pilots are useful, but thinking about scale and mainstreaming early on will avoid major headaches and challenges later.
Four years ago, in March 2008, I was at a JISC Experts Group meeting at The Watershed Media Centre in Bristol. There was a range of interesting sessions; one I remember well was a report from Bob Rotheram, National Teaching Fellow at Leeds Metropolitan University, on the Sounds Good project.
This session will give members the experience of receiving audio feedback on assessed work, learn about the Sounds Good project funded under the JISC Users and Innovations Programme, and consider the potential of this approach.
This was a really interesting project looking at the use of audio feedback for student assessment. With the new technologies that had become available, it was a lot easier to record feedback and, importantly, distribute it to the learners. Recording audio was something that happened a lot in universities for interviews and research, but it was usually to tape (as in cassette tape; I am sure there are a few people out there reading this, thinking to themselves, what is this "tape" you are writing about, is it "sticky tape"?). The downside of cassette tapes was that they weren't free, and there was a logistical hassle not just in recording, but also in getting the tape to the learner.

In the last ten years, we have seen portable audio recording move away from cassette tapes to mp3 recorders. It is very easy to make an audio recording, save it as an mp3 and send it by e-mail. A lot of smartphones now have that capability built in, though I am sure a lot of people will use a specialised mp3 audio recorder. This change in technology made the concept of providing learners with audio feedback not just practical, but easily achievable. As a result, Bob and the team at Leeds Metropolitan were able to undertake a detailed study of the issues and implications of providing audio feedback.
I do remember being quite taken by the idea, and when I was back in the office I went over it with a few members of staff, who went off and had a go themselves.
Students were overwhelmingly positive about receiving audio feedback on their coursework. They frequently remarked approvingly about its personal nature and the detail provided, evidence that the lecturer had carefully considered their work. On the other hand, a small minority of students said they preferred written feedback; a few asked for both audio and written comments on their work.
The final report is well worth reading.
On Friday the 6th July, there was a Teaching and Learning conference at Plymouth University. I wasn’t there but quite a few people I follow on Twitter were.
Steve Wheeler asked the question.
What about giving audio feedback on assessment? Many students are part of a listening culture. #ptlc12
Chiang, I-C. A. (2009). Which audio feedback is best? Optimising audio feedback to maximise student and staff experience. Aberystwyth University.
Bunyan, N., King, D. & McGugan, S. (2008). Does it make a difference? Replacing text with audio feedback. Practice and Evidence of Scholarship of Teaching and Learning in Higher Education, Vol. 3, No. 2, pp. 125–163.
Merry, S. & Orsmond, P. (2007). Students' Attitudes to and Usage of Academic Feedback Provided Via Audio Files. Bioscience Education e-journal, Vol. 11.
I did then ask in the same tweet if we should…
…ignore all previous research?
What followed was a discussion on the value of that previous research. The question that was never answered, partly as I didn’t ask it, was having seen the outcomes of those audio feedback projects, were staff at Plymouth going to start using audio feedback, or were they going to do some more research on the effectiveness of audio feedback? I did feel from the responses I was getting that some people didn’t value the research and therefore were going to ignore it.
This happens all the time with research in learning technologies, and I am sure it is pretty much the main reason change takes too long to happen and various technologies take so long to be adopted.
I would go further: mainstream adoption of learning technologies is rarely the result of what has been learnt through projects and research; it just "happens" slowly, as teaching teams pick up a technology from others who happen to use it, or see others using it and decide to give it a go.
I would add that most decisions about learning technologies are probably made without any regard to the research, and are taken by IT directors or management teams based on what their competitors are doing or on some article in a national newspaper. Okay, that newspaper article may be based on a piece of research, but more likely it was a PR piece from a technology company.
Think about all those technologies currently embedded in the institution (ignoring those used for administration): why were they adopted, and for what reason? Was the use of PowerPoint by teaching staff the result of a range of research projects? On the contrary, I suspect very few staff have even looked at the research into the use of PowerPoint, or into presentation techniques; if they had, we would never talk about "death by PowerPoint" and we would never complain about horrible slides at learning technology conferences.
So why don’t we trust what others have said and written about the use of learning technologies? Why do we think that our own institutions and learners are so different to others? Why don’t we learn from the research of others?
The end result, more often than not, is that there are no sudden moves to adopt new ways of working or embed new technologies. Think about audio feedback: four years after the Sounds Good project, we are no closer to making use of the research, we avoid the lessons learnt from the project, and we waste time and resources recreating or duplicating the work in our own pilots.
Are pilots just a way of playing with toys, or are they a useful tool to support the embedding of new practices and technologies to enhance learning? If you are interested in discussing this further, I am running a symposium at ALT-C 2012 called Pilot Mentality. I have also written a previous post about running pilots.
I am currently putting together an abstract for a debate at ALT-C on the value of pilots and projects. This is something I blogged about before.
The essence of the debate is spread across two viewpoints.
Pilots and projects represent value for money and are a valuable tool in evaluating, experimenting and reflecting on the use of new pedagogies and learning technologies. They are a key part of embedding organisational change.
Pilots and projects are an inefficient method for the mainstream adoption and embedding of new pedagogies and learning technologies. They are of little value to organisations and are often used as part of a cycle of funding rather than organisational change.
After posting my initial idea on Twitter, I think this would be an interesting debate, building on discussions in this area at previous conferences.
So where next?
Well, we need a chair and a panel. I am hoping to speak about the inefficiencies of every organisation undertaking its own pilots and projects, and the need to learn from the research and pilots undertaken elsewhere. I would like a varied panel, so if you are interested in taking part (and will be attending ALT-C 2012), let me know, either in the comments or at firstname.lastname@example.org. Please note that you need to be working either in an FE or HE institution, or for one of the sector agencies such as JISC, CETIS, HEA, LSIS, etc.
So are you thinking about running a pilot or a trial?
How many pilots do we need? Or is it more that we feel we need to run a pilot at our own institution before we think about "rolling" it out across all curriculum areas? I am also aware of successful pilots in one curriculum area which have been followed by virtually identical pilots in a second curriculum area… Why? Well, the learners are different! Really? How different: do they have two heads or something? That raises a question about any pilot: how many successful pilots have actually resulted in a roll-out across the whole institution?
We do see institutions that use tools such as PowerPoint across the institution; similarly, we see some institutions that have embedded the use of the VLE. However, was this via projects and pilots? Or was it something different?
Do pilots actually help institutions move forward in using learning technologies, or are they creating problems rather than solutions?
Do you read about pilots and projects from other organisations? Do you follow their advice when implementing new technologies or do you decide to run your own pilot? If we don’t learn from pilots that others do, is there any point in doing or talking about pilots?
I also had a recent conversation in which an institution was going to run a pilot because it couldn't afford a mainstream roll-out of the technology. Now this I really didn't understand: you already know from the research undertaken that the technology works and has a positive impact, yet rather than buy enough for the institution, you're only going to buy enough to repeat the pilot already done. Why couldn't they buy enough? Well, they weren't given the funding.
Maybe the question is, why aren’t the people who are making the financial decisions reading the research and project outcomes?
Personally my view is that if there is only enough money for a pilot, it’s probably not worth doing and you would be better off spending the money on reinforcing and enhancing the use of a technology you already have. However many might see that as boring.
I thought I would mention some of the things I have done at my institution in relation to the introduction of new technologies, and the impact they have had.
When the JISC Collections e-Books for FE announcement was made, I immediately signed the college up. I recall talking to a colleague who said, "so which group of students should we pilot this with?" I thought for a minute and wondered why we needed to do a pilot or a trial. Hadn't JISC Collections already done that, seen the need to provide the collection and given us an opportunity? So I replied, "no, we're not going to do a pilot, we're going to launch it for all learners and tell everyone about it. The pilot projects have already been done by JISC; e-books do work, they support, enhance and enrich learning, so why on earth would we want to repeat that work, to get the same results, oh, and get no funding to do it?" As a result of the mainstream launch of the e-books into the college, we now have learners and practitioners using e-books to support their learning. No need to do a pilot; we knew it worked elsewhere, so why wouldn't it work at our college?
I could go on about Flip'ping Pilots, but when an opportunity came to purchase some SD-card-based video cameras, rather than buy a set of 15 and see how they worked out with groups, we purchased over 300 cameras. The result was just what I expected: more practitioners creating and using video in their teaching, and learners using video for assessment and reflection. Availability of the cameras was the real issue; having lots of them meant that whenever someone wanted to use one, they either had one in their pocket or could get hold of a class set really easily. Was I concerned about spending that amount of money on cameras that wouldn't be used? Well, probably slightly; however, pilots and projects done elsewhere had demonstrated again and again that video had a really positive impact on teaching and learning, so why wouldn't it work at our college?
I remember seeing a demonstration of Activexpression by Promethean at my college and being very impressed. The main reason I liked the system over other "clicker" systems, including the Promethean Activote, was that you could use it without needing to spend ages preparing the questions in advance.
However, another thing I knew from reading about projects that had implemented clickers in other institutions was that staff didn't use their sets of clickers very much because they weren't sure the clickers would be available, but when they did use them they thought they worked really effectively. The lesson was simple: ensure you have enough clickers available. We also had a need to make assessment more engaging and "fun", and clickers or voting units seemed like an ideal solution based on the work done elsewhere. So once more, when some funding was available, we purchased 1500 Activexpression handsets, nearly enough for a hundred classes! They were made available in a range of departments. The result? Well, most of the sets were used, and used on a regular basis, to the point where they are now embedded into practice. I should say, however, that not all departments engaged with the technology, and some sets were left in cupboards. After a period of implementation and reflection we relocated the sets not being used. The result was that across many curriculum areas the clickers were being actively used to enhance and enrich learning. I had seen the results of many pilots and projects that had used clickers and voting units, so why wouldn't it work at our college?
When the iPad first came out, I didn't think it was going to be the radical device for me that it has eventually come to be. In the end I was really impressed with the device and how it improved my efficiency and workflows in my job. As a result I bought every member of the management team in my centre an iPad. As well as Libraries and e-Learning, my centre includes Construction, Engineering and Schools Liaison. I certainly didn't see this as a pilot or a project; it was much more about them benefiting from the lessons I had learnt. I have had quite a few people in the college come and ask me to provide them with iPads (like I have the budget for that) or ask to "pilot" them with a group of their learners. As far as I am concerned, there have been lots of iPad pilots and projects elsewhere in the world and my college doesn't need to repeat those experiences; the lessons have been published, the problems identified and many of the issues resolved. For me the question is no longer whether iPads are useful or whether they will enhance and enrich learning; the question is whether iPads will solve a specific problem we have in the college, and whether they will increase retention and achievement for a particular cohort. If I can answer those questions, I can then ask whether the cost of the iPads will be outweighed by the benefit they bring. We don't have that many iPads at my college, but those who do have them find they are really useful and have had quite an impact on their work. Elsewhere, other iPad projects have demonstrated the value they can bring to learning, so why wouldn't it work at our college?
So with all the wonderful stuff that has been discussed at various conferences and events, I wonder how many of you are thinking about your next project, your next pilot, your next research grant bid… Do I only want to do a pilot because a) everyone else is doing a pilot, b) it means I get an exciting new gadget to play with, or c) I need to be seen to be doing new and innovative stuff? Pilots are fun, aren't they?
Or are you thinking differently: thinking about why wouldn't this work at my place? Why can't I do a mainstream roll-out of this new technology?
Are you thinking differently?