It’s an extra, but does it need to be?


Over the years I have spent a lot of time working with teachers, helping them to embed digital technologies into their practice. I have also collaborated with colleges and universities and seen the strategies they use to embed digital. In an earlier post I described my journey and the approaches I have used for support and strategy. In this series of articles I am going to look at the processes that many teachers use for teaching and learning, and describe not only tools and services but, importantly, the organisational approaches that can be used to embed those tools into practice.

One challenge that is often faced when embedding learning technologies is that a lot of teaching staff see digital technology as something extra to do in their teaching. It’s a bolt-on, something extra to be done on top of the teaching and assessment workload.

Part of this has to be down to the way in which staff are introduced to, or trained in, the use of learning technologies. Staff attend a training session, or read an e-mail about some kind of new service, tool or piece of functionality, and are then asked (usually politely and nicely) to start using it as part of their work. The obvious reaction is that staff will see this as an extra.

Another part is down to the technocentric approach that is often taken when talking about learning technologies: the training focuses on the technical side of tools and services, on this is how you use it and this is how it works.

At a simple level, even just uploading presentations to the VLE is an extra piece of work. Using a lecture capture system requires more effort than just delivering the lecture. Using Padlet to capture feedback requires more time than not capturing feedback.

This negative reaction to learning technologies then extends to other kinds of technology, even those that can save time, or those whose potential benefits are longer term and so go unrecognised.

So what can be done?

I don’t think there is anything wrong with the “this is an extra” model of staff development; it will certainly inspire and help those who want to engage with technology and those who can see the potential long-term benefits. However, in order to engage those staff for whom it is an “extra”, different approaches need to be considered and used.

There isn’t anything wrong with the technocentric approach, despite what the “pedagogy first” brigade may tell you; however, the focus needs to be on the potential of the technology: what it can do, what it can provide and what the benefits are. The technical processes can be covered as well, but put the focus on the benefits for the member of staff and, importantly, the learners.

Another method is to focus on the processes and workflows that staff have and see how technology can improve, enhance and smooth out those processes.

Finally, what about the affordances that new technologies can bring: the potential not just to change what we do, but to allow us to do things we had never considered?

So what strategies do you use to engage staff who see embedding technology as an extra?

Open the pod bay doors…

People are not fearful of algorithms, they’re fearful of agendas that the algorithms represent.


Over the last few weeks I have been discussing and listening to people’s views on the intelligent campus.

One topic which has resulted in a fair bit of controversy is the concept of using artificial intelligence to support teaching and learning. This isn’t some kind of HAL 9000 computer running the campus and refusing to open the library doors until Dave the learner has finished their essay. This is more about a campus system being able to learn from the users, take that data, do some analysis and make suggestions to the user on potential ideas for improvement and useful interventions.

Imagine a learner arriving at campus with the intention of writing an essay, needing a quiet place in which to do this. They check their Campus App on their smartphone and it recommends a location based on the ambient noise levels and the type of environment the learner has used before. It could take into account the distance from the coffee shop, depending on whether coffee is used as a distraction or supports the learner in writing their essay. The learner can of course ignore all this and just go where they want to; the app provides informed guidance, and it learns which spaces suit the learner as they carry out more learning activities.

Another scenario is a teacher planning a session with some relatively interactive and engaging learning activities. They ask the intelligent campus where the best place is for this to happen. The system takes on board the preferences of the teacher, the availability of rooms, information from previously successful similar sessions and any feedback from learners. The teacher can then make an informed choice about the best space for the session. After the learning, the system asks for feedback so that it can learn from and improve the decisions it makes.
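A recommendation like this could, at its simplest, be a weighted scoring function over candidate spaces. The sketch below is purely illustrative: every attribute, weight and space name is invented, and a real intelligent campus would learn its weights from sensor data and feedback rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Space:
    name: str
    noise_level: float       # 0.0 (silent) to 1.0 (very noisy), hypothetical sensor value
    distance_to_coffee: int  # metres to the nearest coffee shop, hypothetical attribute
    past_rating: float       # average feedback score from previous sessions, 0 to 5

def recommend(spaces, preferred_noise, likes_coffee_nearby):
    """Rank spaces by closeness to a user's learned preferences.

    A purely illustrative scorer: the weights below are guesses,
    standing in for values a real system would learn over time.
    """
    def score(space):
        noise_fit = 1.0 - abs(space.noise_level - preferred_noise)
        coffee_fit = 1.0 / (1 + space.distance_to_coffee / 100)
        if not likes_coffee_nearby:
            # for this user, coffee is a distraction, so distance is a plus
            coffee_fit = 1.0 - coffee_fit
        return 0.5 * noise_fit + 0.2 * coffee_fit + 0.3 * (space.past_rating / 5)
    return sorted(spaces, key=score, reverse=True)

spaces = [
    Space("Library silent zone", 0.1, 250, 4.5),
    Space("Learning commons", 0.6, 30, 3.8),
]
best = recommend(spaces, preferred_noise=0.2, likes_coffee_nearby=False)[0]
print(best.name)  # the silent zone fits a low-noise preference best
```

The key design point, picked up later in this post, is that the ranking is advice, not instruction: the user sees an ordered list and remains free to ignore it.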

I think one of the issues (or should we call them problems and fears) that people have with a concept such as this is that they feel any such algorithm is secret and hidden and will have a built-in bias.


As I wrote in my previous blog post on open analytics:

So if we are to use algorithms to manage the next generation of learning environments, the intelligent campus, support for future apprenticeships and data driven learning gains, how can we ensure that we recognise that there is bias? If we do recognise the bias, how do we mitigate it? Using biased algorithms, can we make people aware that any results have a bias, and what it might mean?

People are not fearful of algorithms, they’re fearful of agendas that the algorithms represent. But if we make these algorithms open and accessible, we could mitigate some of those concerns.

So if we are to use algorithms to support teaching and learning, could we, by making the algorithms open, remove some of those fears and frustrations people have with a data approach? By making the algorithms open, could we ensure that staff and learners could see how they work, how they learn and why they produce the results they do?

This does bring up another issue that people have mentioned: the bias any algorithm has, bias which comes from the people who write it, sometimes consciously, sometimes unconsciously. There is an assumption that these algorithms are static and unchanging, written by people who have an agenda. But as we know from Google and others, algorithms are constantly changing and being tweaked.

Could we go one step further and allow people to edit or even create their own algorithms? Allowing them to make suggestions on how they could be improved, creating new analytics that could benefit the wider community.
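One way to picture open analytics is to keep the rules as plain, inspectable data rather than burying them in code, so that every result carries its own explanation and anyone can see exactly what to question or propose changing. The rule names, weights and metrics below are entirely hypothetical, a sketch of the idea rather than any real analytics product.

```python
# Each rule is open data: anyone can read it, question it, or propose a change.
# All names, weights and explanations here are invented for illustration.
RULES = [
    {"name": "attendance", "weight": 0.4,
     "explanation": "sessions attended as a fraction of sessions scheduled"},
    {"name": "vle_activity", "weight": 0.3,
     "explanation": "VLE logins this term, normalised to 0-1"},
    {"name": "assessment", "weight": 0.3,
     "explanation": "average mark on submitted work, normalised to 0-1"},
]

def engagement_score(metrics):
    """Return a score plus a per-rule breakdown, so the result is never a black box."""
    breakdown = []
    total = 0.0
    for rule in RULES:
        value = metrics.get(rule["name"], 0.0)
        contribution = rule["weight"] * value
        total += contribution
        breakdown.append((rule["name"], contribution, rule["explanation"]))
    return total, breakdown

score, why = engagement_score({"attendance": 0.9, "vle_activity": 0.5, "assessment": 0.7})
print(round(score, 2))  # 0.72
for name, contribution, explanation in why:
    print(f"{name}: {contribution:.2f} ({explanation})")
```

Because the rules are just a data structure, letting people edit or add their own becomes a matter of governance rather than engineering: a proposed change to a weight is as visible and debatable as an edit to a wiki page.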

We need to embrace the benefits of a smart campus, because the technology is already here, but we need one which learns from the people who use it; we need to ensure that those people are the ones who inform and guide the development of that learning. They can decide which intelligent campus decisions to benefit from and which to ignore. By making the whole process open and accessible, we can provide confidence in the decision making; we can feel we can trust those decisions. We mustn’t forget that giving people that literacy in this area is perhaps the most important thing of all.

Remember that in both scenarios above the learner and the teacher ultimately have the choice to ignore the intelligent campus’s suggestions; they can decide for themselves to close the pod bay doors.

Opening the algorithms: Could we use open analytics?


Do you remember when the Google algorithm wasn’t that good? Well, it was good, but today it’s better.

Many years ago, if you searched for a hotel on Google, perhaps to find out if there was car parking, or to find the restaurant menu on the hotel website, the search results most of the time were not the hotel website but hotel booking sites offering cheap hotel rooms. Pointless if you already had a room and all you wanted to know was whether you had to pay for car parking, or what time you could check out. The problem was that the hotel booking sites had worked out how the Google search algorithm ranked sites and “gamed” Google search.

Today the experience is very different: the algorithm usually results in the actual hotel website being the top hit in any search for a specific hotel.

Google had worked on the algorithm and ensured that what they saw as the correct search result was the one at the top.

One thing that many people don’t realise is that Google not only worked on the software behind the algorithm, but also used human intervention to check that the algorithm was providing the search results they thought it should. If you wonder why Google search is better than the search functions on your intranet and the VLE, this is probably why: Google uses people both to write the algorithms and to tweak the search results. Using people can result in bias.


So if we are to use algorithms to manage the next generation of learning environments, the intelligent campus, support for future apprenticeships and data driven learning gains, how can we ensure that we recognise that there is bias? If we do recognise the bias, how do we mitigate it? Using biased algorithms, can we make people aware that any results have a bias, and what it might mean? If we are to, like Google, use human intervention, how is that managed?

One aspect of Google’s search algorithm that some people find frustrating is that the whole process is secret and closed. No one apart from the engineers at Google really knows how the algorithms were written, how they work, or what level of human intervention there is.

So if we are to use algorithms to support teaching and learning, could we, by making the algorithms open, remove some of those fears and frustrations people have with a data approach? By making the algorithms open, could we ensure that staff and learners could see how they work, how they learn and why they produce the results they do?

Could we go one step further and allow people to edit or even create their own algorithms? Allowing them to make suggestions on how they could be improved, creating new analytics that could benefit the wider community.

Is it time for open analytics?

Thank you to Lawrie Phipps for the conversations we had after the Digital Pedagogy Lab: Prince Edward Island conference and this blog post.

Hey Siri, how am I doing on my course?

Of course if you ask Siri today, her reply is, well, it depends…

[Screenshot: Siri’s first reply]

Another response to the same question…

[Screenshot: Siri’s second reply]

As for tomorrow… well, could it be different?

However, for other things, intelligent assistants such as Siri, Cortana and Google Now can be a really useful mechanism for finding things out, for keeping on top of things you need to do and, in many instances, for fun stuff too.

I have been wondering if we could utilise these technologies and apply them to learning.

We know that most (if not all) people, when using a chat function on a website, can usually tell when they are talking to a chat bot and when they are talking to a real person.

What if we didn’t even try to pretend that the intelligent agent is real, and actually played up to the fact that it is artificial? Give it a name and even create a personality for it?

This way we could avoid some of those ethical issues about a piece of software trying to pretend to be human. The students know it is not real and it is advertised accordingly.

But could such a tool be a useful thing for students to have access to? One that could learn from its experiences and fine-tune its responses and interventions?

It doesn’t necessarily have to be voice controlled, users could use text entries.

So what kind of questions could learners ask?

What is the best book for this module?

Where is a quiet place to study on campus?

What other questions could we expect learners to be asking?
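A minimal sketch of such an openly artificial assistant might be nothing more than a keyword-to-answer table that staff could inspect and extend, with no pretence of being human. The questions, keywords and answers below are made up for illustration; a real service would draw answers from live course and campus data.

```python
# A deliberately simple, openly artificial assistant. Its "knowledge" is a
# plain keyword-to-answer table, so there is nothing hidden about how it
# decides what to say. All answers here are invented examples.
FAQ = {
    ("book", "reading"): "The recommended text for this module is listed in the online reading list.",
    ("quiet", "study"): "The library silent zone is usually the quietest space on campus.",
    ("deadline", "due"): "Your next assignment deadline is shown on the VLE course page.",
}

def ask(question):
    """Return the first answer whose keywords appear in the question."""
    words = question.lower().split()
    for keywords, answer in FAQ.items():
        if any(keyword in words for keyword in keywords):
            return answer
    # being openly artificial means admitting its limits, not bluffing
    return "I'm only a simple bot, so try rephrasing or ask a tutor."

print(ask("Where is a quiet place to study on campus?"))
```

A version that “learns from its experiences” could log unmatched questions and let staff review them and add new entries to the table, keeping the human in the loop rather than hiding the mechanism.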

Making the move from the radio…


At the recent AELP Autumn Conference I was having a discussion about the challenges that online delivery can pose for training providers and employers. Often they have excellent instructors and trainers who work well in face-to-face classroom situations.

There is often an assumption that because someone is excellent in face-to-face learning scenarios, they will be able to easily transfer those skills into an online environment, as the scenarios are very similar.

This is quite a risky assumption to make: though there are similarities between delivering learning in classrooms and online, they are not the same.

It was, and can be, challenging for radio personalities to move into television; even though both are broadcast media, and there are similar programmes on both (think The News Quiz and Have I Got News For You), the skills required are quite different.

In a similar vein, many stars of the silent cinema were unable to make the move to the talkies. Those that did certainly thrived; those that couldn’t, didn’t!

If we are to make the move to a truly digital apprenticeship experience for all apprentices, then we need to ensure that the instructors and trainers involved in the delivery of learning have the right capabilities and skills to deliver effectively online.

Digital confidence, capacity and capability often need to be built in staff who may already have excellent skills in delivering learning face to face. Certainly many things are transferable, but the skills in facilitating a classroom discussion are different from those needed to run a debate in an online forum.

So the question is, how do we build that digital capability? How are you building digital and online skills? What are you doing to ensure the successful transition to online delivery?

Image Credit: Old Radio by idban secandri CC BY-NC-SA 2.0
