Over the last few years, rather than taking notes during the keynotes (and other sessions) at the ALT Conference #altc, I have been drawing pictures. This is sometimes called sketchnoting.
My sketchnotes are really for me, rather than for other people. The process of sketching allows me to digest what is being talked about and demonstrated, and gives me a mechanism for interpreting what is being said and what I understand from the talk. Sketching engages me in the talk in the way that note taking, or conversing on Twitter, does for others. The sketches are not done for other people; if other people find them useful then that’s just a bonus. Having said that, I do share them online, through Twitter (and Flickr).
One issue I have found with TEL research that is written up in journals or presented at conferences is that most, if not all, of it is based on small cohorts of learners and rarely looks at the impact across a whole organisation.
Sometimes TEL research appears to be about doing research, without necessarily thinking about scaling up and mainstreaming the findings at a later date.
Often the research has minimal funding, which forces the use of a smaller number of learners. That lack of funding also means that, though there is existing research out there, rather than scaling up from it the research is duplicated (again with a small cohort).
It should be noted that sometimes TEL research shouldn’t be scaled up straight away, because doing so may have a detrimental impact on the learners.
One lesson I would pass on: if you are undertaking a small-scale pilot, reflect on how it would scale in the future. It’s not just about how the technology, device or process enhances and enriches teaching and learning, but about all the other stuff: logistics, charging, storage, training, support, staff development, sustainability, and end of life.
When the iPad was released in 2010, there were lots of iPad pilots undertaken by universities and colleges.
Most of these didn’t seem to take into account the challenges that a large-scale roll-out of iPads would bring. Where would the iPads be stored? How would they be charged? When the iPad was released there were no charging carts available. I once asked a charging-case company to provide a case for the Nintendo DS; they couldn’t, as there wasn’t the demand for one and it was too expensive to test for a single order. The end result was multiple six-way gangs to charge the class set; it was no wonder so few people used them.
Another challenge with iPads was how a device designed for an individual could be used by multiple people. If you had a few you could reset them individually: fine if you had ten, impossible if you had a hundred. Then there was the challenge of getting apps onto them. You could sync multiple iPads to a single iTunes account: fine again if you had ten, not feasible if you had a few hundred! Eventually Apple released management software for handling multiple iPads and a licensing platform to licence apps across them.
However, those initial iPad pilots weren’t always thinking in terms of that bigger picture, so they weren’t able to scale effectively until those issues were resolved.
So if you are looking at a pilot, consider the following:
If it involves a device or a technology, what about storage: where will the devices live when not being used? How will they be charged? What will be the process for booking them? Even if you have a small number, consider what it would look like if you had enough for everyone!
If it is a new process (or web-based tool), you may be able to train the ten staff in the pilot, but what about training staff across the whole organisation? How is that going to happen, and who is going to do it? Thinking about that at the pilot stage means that, if the pilot is successful, it will be easier to scale and mainstream later. Similar considerations apply to staff development, and to support when things don’t go as planned.
There are also considerations of sustainability: whose budget will those costs be placed in after the pilot? Are they on board?
What about end of life and equipment replacement? Where will the funding for all that come from?
Small scale pilots are useful, but thinking about scale and mainstreaming early on will avoid major headaches and challenges later.
I think this line, from a recent discussion on the ALT Members mailing list, is really interesting:
…in particular to share these with academics when they ask for the evidence to show technology can make a difference.
When demonstrating the potential of TEL and learning technologies to academics, the issue of evidence of impact often arises.
You will have a conversation which focuses on the technology, and then the academic or teacher asks for evidence of the impact of that technology.
From my experience, when an academic asks for the evidence, the problem is not a lack of evidence but something else.
Yes, there are academics who will respond positively when shown the “evidence”. However, experience has taught me that even when that happens, there is then another reason, problem or lack of evidence which means the academic will still not start to use technology to “make a difference”.
When an academic asks “for the evidence to show technology can make a difference” the problem is not the lack of evidence, but one of resistance to change, fear, culture, rhetoric and motivation.
You really need to solve those issues rather than find the “evidence”, as even if you find the evidence you will then get further responses: it wouldn’t work with my students, it’s not appropriate for my subject, it wouldn’t work here, it’s not quite the same, it’s not transferable, and so on.
Despite years of “evidence” published in a range of journals, and case studies from Jisc and others, you will find that whatever evidence you “provide” won’t be good enough to persuade that academic to start embedding that technology into their practice.
As stated before, when someone asks for the “evidence”, more often than not this is a stalling tactic so that they don’t have to invest the time, energy and resources into using that technology.
Sometimes it can be “fear”: they really don’t have the capability to use technology and lack the basic ICT confidence to use various learning technologies, and rather than fess up to their lack of skills, they ask for the “evidence”, again to delay things.
Turn it around: when you ask those academics who do use technology, you find that the “evidence” generally plays little or no part in their decisions to make effective use of it.
So what solutions are there? Well, we need to think about the actual problems.
A lot of people like things to remain as they are: they like their patterns of work, and they like to do what they’ve always done. This is sometimes called resistance to change, but I think it’s less resistance to change and more sticking to what I know: I know what works, it works for me, and anything else would require effort. This strikes me as more about culture, a culture where improvement, efficiency and effectiveness are seen as unimportant and the status quo is rarely challenged.
Unless an organisation is focused, strategically and operationally, on improvement, widening participation and becoming more efficient, it is hard to get people to think about changing their practice.
When it comes to embedding learning technologies, we often talk about changing the culture of an organisation. This can be hard, but it doesn’t necessarily have to be slow. I am reminded of a conversation with Lawrie Phipps in which he said we have to remember that academics often like the current culture; it’s why they work in that place and in that job. So don’t be surprised when you are met with resistance!
Creating a culture which reflects experimentation, builds curiosity and rewards innovation isn’t easy, but it isn’t impossible either. There are various ways in which this can be done, but one lesson I have learnt is that the process needs to be holistic and the whole organisation needs to embrace the need to change the culture. What I have found is that you need to identify the key stakeholders in the organisation, the ones who actually have the power to make change happen. In one college I worked in, the real “power” wasn’t with the Senior Leadership Team (who often had the same frustrations I had when it came to change) but with the Heads of Faculty, the managers who led and managed the curriculum leaders. They had the power to make things happen, but they didn’t always realise they held that power.
Getting the rhetoric right, and understood across the organisation, is critical for success in embedding learning technologies. Often messages are “broadcast” across an organisation, but staff don’t really understand what is meant by them, and many staff don’t think they apply to them. Getting a shared understanding of what is required from a key strategic objective is challenging. I have done this exercise a few times and it works quite well: pick a phrase from your strategic objectives and ask a room of staff or managers to write down, individually, what it means. You find that everyone usually has a different understanding. A couple of examples to try include buzz phrases such as “the digital university” and “embrace technology”.
Finally, look at what motivates people to use technology to improve teaching, learning and assessment.
When I was teaching, I would often experiment with technology to see if it made a difference; if it did, I adopted it, and if it didn’t, I stopped using it. The impact on the learners was minimal, as I didn’t continue to use technology that made no difference or even had a negative impact. I applied the same process and logic to all my teaching. When I created games to demonstrate various economic processes, if they made a difference I used them again; if they didn’t, I would ask the learners how they would change or improve them. When I gave out a reading list of books, I would ask the learners for their feedback, and those books that made no difference or had no positive impact would be removed from the list! I was personally motivated, but we know you can’t just make that happen for everyone.
When I was managing a team, I ensured that experimentation and innovation were part of their annual objectives and created SMART actions that would ensure they were “motivated” to do this. Again, you need to identify the key stakeholders in the organisation, the ones who actually have the power to make this happen.
So when someone asks you to show them the evidence what do you do?