Tag Archives: analytics

The Intelligent Library #CILIPConf17

So what is the intelligent library? What is the future of the library?

At the CILIP Conference in Manchester this year, on Thursday 6th July, I am delivering a high level briefing session on technology, specifically looking at the library within the intelligent campus space. The session will explore the potential technologies and the possibilities that can arise from the developments in artificial intelligence and the internet of things.

There has been plenty of hype over artificial intelligence and the internet of things. Is it time to put aside the cynicism that this kind of hype generates and look seriously at how we can take advantage of these emerging technologies to improve the student experience and build an intelligent library?

The internet of things makes it possible for us to gather real-time data about the environment and usage of our library spaces. It is easy to imagine using this data to ensure the library is managed effectively, but could we go further and monitor environmental conditions in the library, or even, using facial recognition software, student reactions as they use the library so that we can continually refine the learning experience?

Most smartphones now make use of artificial intelligence to make contextual recommendations based on an individual’s location and interests. Could libraries take advantage of this technology to push information and learning resources to students? If we could, it offers some interesting possibilities. On-campus notifications could nudge students to make best use of available services such as the library. Off-campus notifications could encourage them to take advantage of the learning opportunities all around them. Could we use approaches like this to turn students’ smartphones into educational coaches, nudging students towards the choices that lead to higher grades and prompting them to expand their learning horizons?

As we start to use a range of tracking technologies (smart cards, beacons, sensors), we face a deluge of data on the use of buildings, spaces and equipment across a college or university campus. The breadth and depth of this data can be challenging to use effectively and to greatest impact. These tracking technologies are already widespread in environments such as airports and retail, which often use wifi tracking to follow users via their wifi-enabled devices and smartphones, alongside sensors that measure space utilisation and occupancy. Interpreting the data is fraught with challenges and difficulties, as well as potential ethical and legal issues. However, this wealth of data does offer the potential to deliver more satisfying experiences for students and staff, as well as ensuring the library is used as effectively as possible.

Looking in more detail, we can outline some potential use cases for the intelligent library, and we may want to think about which of these are desirable, but also which are possible with the current state of technology.

We can imagine an intelligent library which not only knows which seats and PCs are free, but can learn from history and predict when the library will be busy and when it will be quieter. The library then provides this information to students via an app, promoting the library when more places and computers are available.
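As a minimal sketch of this "learn from history" idea (the data format and function names here are hypothetical, not any real library system's API), past headcounts could simply be bucketed by weekday and hour and averaged to predict how busy a future slot will be:

```python
from collections import defaultdict

def predict_occupancy(history, weekday, hour):
    """Predict occupancy for a (weekday, hour) slot by averaging past
    headcounts recorded for that slot.

    history: list of (weekday, hour, headcount) tuples, e.g. from gate
    counters or wifi association logs. Returns None if the slot has no data.
    """
    buckets = defaultdict(list)
    for wd, hr, count in history:
        buckets[(wd, hr)].append(count)
    counts = buckets.get((weekday, hour))
    if not counts:
        return None
    return sum(counts) / len(counts)

# Example: Monday (0) at 10:00 has been busy in past weeks
history = [(0, 10, 180), (0, 10, 200), (0, 14, 90)]
print(predict_occupancy(history, 0, 10))  # 190.0
```

A real system would want seasonality (term dates, exam periods) folded in, but even this crude average is enough to drive an availability hint in an app.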

Having a deeper understanding of how the library is utilised will allow for more effective and efficient use of space. Could this also mean we have a flexible library that expands and contracts as demand for space changes over the day or over the year?

Could we use wireless technologies, such as RFID, not just for issue and return, but also track those resources as they are used within the library itself? Could we also use the same technologies to track resources across campus to identify areas where they are being used or stored (or even lost)? Could we then enhance those spaces to improve learning?
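The "where is this resource now?" idea can be sketched as a simple reduction over RFID reader events; the event shape and reader names below are hypothetical:

```python
def last_seen(events):
    """From RFID reader events (item, reader, timestamp), return each
    item's most recent location: a crude 'where is this book now?'."""
    latest = {}
    for item, reader, ts in events:
        if item not in latest or ts > latest[item][1]:
            latest[item] = (reader, ts)
    return {item: reader for item, (reader, _) in latest.items()}

events = [("book-42", "floor-1", 10), ("book-42", "floor-3", 15)]
print(last_seen(events))  # {'book-42': 'floor-3'}
```

Aggregating these last-seen locations over time would show which spaces resources gravitate towards, and which items have quietly gone missing.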

Could we use facial recognition to monitor regular users of the library and provide insight and data into learning analytics? Could we go one step further and use facial recognition technology to discover when students are “troubled” or “in need of help” and then make appropriate interventions to support them in their studies?

If the library is getting full, could we identify those students who have been in there a long time, and push a notification, incentivising them to take a break with a free coffee from the library coffee shop? Could we go one step further, and promote wellbeing, by doing the same, but with a free coffee on the other side of campus, so they have to go outside and get some air and exercise?
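The nudge described above reduces to a simple rule: only act when the library is nearly full, and target those who have stayed longest. A minimal sketch, with all thresholds and identifiers hypothetical:

```python
def coffee_nudges(occupancy, capacity, sessions, now,
                  min_stay_hours=3, threshold=0.9):
    """Return the students to nudge with a coffee voucher: only when the
    library is nearly full, pick those who have been in the longest.

    sessions: dict of student_id -> entry time (hours since midnight).
    """
    if occupancy / capacity < threshold:
        return []  # plenty of space: no need to move anyone
    return sorted(
        sid for sid, entered in sessions.items()
        if now - entered >= min_stay_hours
    )

sessions = {"s1": 9.0, "s2": 13.5, "s3": 8.0}
print(coffee_nudges(460, 500, sessions, now=14.0))  # ['s1', 's3']
```

Swapping the voucher's redemption location to the far side of campus, as suggested above, is then just a change to the notification payload, not the rule.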

Is there any benefit in providing a platform to gather this data from a range of systems in a standard format that makes it easier to analyse and act upon? Would it be useful to have a national view over this data? Would that enable us to find new patterns that could help us discover the story behind the data, make appropriate interventions and improve the use of our libraries? Could we build the tools and practices an institution would need to gather, organise and push this data to students’ smartphones, as well as exploring novel user interfaces such as chatbots?
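As a rough illustration of what a "standard format" might mean in practice (the schema and field names here are entirely hypothetical), a thin normalisation layer could map each source system's records onto one common shape before analysis:

```python
def normalise_event(source, raw):
    """Map a source-specific record onto one shared, hypothetical schema,
    so gate counters, wifi logs and the library system can be analysed
    together and compared across institutions."""
    mappings = {
        "gate": lambda r: ("entry", r["gate_id"], r["ts"]),
        "wifi": lambda r: ("presence", r["ap"], r["seen_at"]),
        "lms":  lambda r: ("loan", r["item"], r["when"]),
    }
    kind, where, when = mappings[source](raw)
    return {"kind": kind, "where": where, "when": when, "source": source}

print(normalise_event("gate", {"gate_id": "main", "ts": "2017-07-06T09:00"}))
```

A national view would then be a matter of agreeing that shared schema across institutions, which is the hard part; the code is trivial by comparison.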

Of course all this tracking and data collection has huge ethical implications. We already gather large amounts of data in the library; sometimes this is anonymised, but sometimes it relates to individuals. At a basic level, we have used physical counters to determine the number of users in the library, as well as library management systems to gather data about the usage of resources and the borrowing behaviour of library users. The intelligent library as outlined above takes all this initial tracking of users one step further.

As the technology in the intelligent library space grows, we need to consider why we want to use these technologies, how we use them, and whether we should. We already use a range of systems to collect data; do we want to put in new systems to gather even more? Some data we need to collect regardless of our concerns: a library management system, by definition, will collect and store huge amounts of data about resources and users. What happens less often now, but may increase in the future, is the processing of that data: analysing it and displaying it in a way that shows a picture. The next step is taking action on what that data shows. It could be an organisational action, but could equally be action related to an individual user. How do we ensure that we have consent to collect data (sometimes this is implicit in using the library), how do we ensure we have consent to process that data, and finally, do we have consent to take action on that data?
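These three stages of consent, collecting, processing and acting, are cumulative: acting on a user's data implies having collected and processed it. A minimal sketch of that ordering (the stage names and record format are hypothetical):

```python
# Hypothetical consent record distinguishing the three stages discussed
# above: collecting data, processing it, and acting on it for an individual.
CONSENT_STAGES = ("collect", "process", "act")

def allowed(consent, stage):
    """True only if the user consented to this stage and every stage
    before it: acting on data implies collecting and processing it."""
    if stage not in CONSENT_STAGES:
        raise ValueError(f"unknown stage: {stage}")
    upto = CONSENT_STAGES[: CONSENT_STAGES.index(stage) + 1]
    return all(consent.get(s, False) for s in upto)

user = {"collect": True, "process": True, "act": False}
print(allowed(user, "process"))  # True
print(allowed(user, "act"))      # False
```

Encoding the stages explicitly, rather than one blanket yes/no, is what makes it possible to collect operational data while still withholding individual interventions.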

What is the future of the library? This session at the CILIP Conference will explore the potential technologies and the possibilities that can arise from the developments in artificial intelligence and the internet of things. Can we build an intelligent library? Do we want to?

Open the pod bay doors…

People are not fearful of algorithms, they’re fearful of agendas that the algorithms represent.

Over the last few weeks I have been discussing and listening to people’s views on the intelligent campus.

One topic which has resulted in a fair bit of controversy is the concept of using artificial intelligence to support teaching and learning. This isn’t some kind of HAL 9000 computer running the campus and refusing to open the library doors until Dave the learner has finished their essay. This is more about a campus system being able to learn from the users, take that data, do some analysis and make suggestions to the user on potential ideas for improvement and useful interventions.

Imagine a learner arriving at campus intending to write an essay and needing a quiet place in which to do it. They check their Campus App on their smartphone, and it recommends a location based on ambient noise levels and the type of environment the learner has used before. It could take into account the distance from the coffee shop, depending on whether coffee is a distraction or supports the learner in writing their essay. The learner can of course ignore all this and go wherever they want; the app provides informed guidance, and it learns as the learner undertakes more learning activities in different spaces.
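This scenario amounts to a small scoring function over candidate spaces: closeness to the learner's preferred noise level, with familiarity as a tie-breaker. A sketch, with all space names and numbers invented for illustration:

```python
def recommend_space(spaces, preferred_noise, past_visits):
    """Rank study spaces by closeness to the learner's preferred ambient
    noise level, breaking ties in favour of spaces used before.

    spaces: dict of space name -> current noise level in dB.
    past_visits: dict of space name -> number of previous visits.
    """
    def score(name):
        noise_gap = abs(spaces[name] - preferred_noise)
        familiarity = past_visits.get(name, 0)
        return (noise_gap, -familiarity)  # lower tuple is better
    return min(spaces, key=score)

spaces = {"silent-floor": 35, "group-zone": 62, "cafe-corner": 55}
print(recommend_space(spaces, preferred_noise=40,
                      past_visits={"silent-floor": 12}))  # silent-floor
```

The "learning" part would be updating `preferred_noise` and `past_visits` from what the learner actually does, including the times they ignore the recommendation.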

Another scenario: a teacher is planning a session with some relatively interactive and engaging learning activities. They ask the intelligent campus where the best place is for this to happen. The system takes on board the preferences of the teacher, the availability of rooms, information from previously successful similar sessions and any feedback from learners. The teacher can then make an informed choice about the best space for the session. After the learning, the system asks for feedback so that it can learn from and improve the decisions it makes.

I think one of the issues (or should we call them problems and fears) that people have with a concept such as this is that they feel any such algorithm is secret and hidden and will have a built-in bias.

As I wrote in my previous blog post on open analytics:

So if we are to use algorithms to manage the next generation of learning environments, the intelligent campus, support for future apprenticeships and data driven learning gains, how can we ensure that we recognise that there is bias? If we do recognise the bias, how do we mitigate it? Using biased algorithms, can we make people aware that any results have a bias, and what it might mean?

People are not fearful of algorithms, they’re fearful of agendas that the algorithms represent. But if we make these algorithms open and accessible, we could mitigate some of those concerns.

So if we are to use algorithms to support teaching and learning, could we, by making the algorithms open, remove some of those fears and frustrations people have with a data approach? By making the algorithms open, could we ensure that staff and learners could see how they work, how they learn and why they produce the results they do?

This does bring up another issue that people have mentioned: the bias any algorithm has, bias which comes from the people who write it, sometimes consciously, sometimes unconsciously. There is an assumption that these algorithms are static and unchanging, written by people who have an agenda. As we know from using Google and other services, these algorithms are in fact constantly changing and being tweaked.

Could we go one step further and allow people to edit or even create their own algorithms? Allowing them to make suggestions on how they could be improved, creating new analytics that could benefit the wider community.

We need to embrace the benefits of a smart campus, because the technology is already here, but we need one which learns from the people who use it; we need to ensure that those people are the ones who inform and guide the development of that learning. They should be able to decide which intelligent campus decisions to benefit from and which to ignore. By making the whole process open and accessible, we can provide confidence in the decision making and feel we can trust those decisions. We mustn’t forget that giving people literacy in this area is perhaps the most important thing of all.

Remember that in both scenarios above, the learner and the teacher ultimately have the choice to ignore the intelligent campus’s suggestions; they can decide themselves to close the pod bay doors.

Opening the algorithms: Could we use open analytics?

Do you remember when the Google algorithm wasn’t that good? Well, it was good, but today it’s better.

Many years ago, if you searched for a hotel on Google to find out if there was car parking, or to find the restaurant menu on the hotel’s website, the top search results were usually not the hotel website but hotel booking sites offering cheap rooms. Pointless if you already had a room and all you wanted to know was whether you had to pay for car parking, or what time you could check out. The problem was that the hotel booking sites had worked out how the Google search algorithm ranked sites and “gamed” Google search.

Today the experience is very different: the algorithm usually puts the actual hotel website at the top of any search for a specific hotel.

Google worked on the algorithm and ensured that what they saw as the correct search result was the one at the top.

One thing that many people don’t realise is that Google not only worked on the software behind the algorithm, but also used human intervention to check that the algorithm was providing the search results they thought it should. If you wonder why Google search is better than the search functions on your intranet or VLE, this is probably why: Google uses people to improve search results, both to write the algorithms and to tweak the results. Using people can introduce bias.

So if we are to use algorithms to manage the next generation of learning environments, the intelligent campus, support for future apprenticeships and data driven learning gains, how can we ensure that we recognise that there is bias? If we do recognise the bias, how do we mitigate it? Using biased algorithms, can we make people aware that any results have a bias, and what it might mean? If we are to, like Google, use human intervention, how is that managed?

The one aspect of Google’s search algorithm that some people find frustrating is that the whole process is secret and closed. No one, apart from the engineers at Google, really knows how the algorithms were written, how they work, or what level of human intervention there is.

So if we are to use algorithms to support teaching and learning, could we, by making the algorithms open, remove some of those fears and frustrations people have with a data approach? By making the algorithms open, could we ensure that staff and learners could see how they work, how they learn and why they produce the results they do?

Could we go one step further and allow people to edit or even create their own algorithms? Allowing them to make suggestions on how they could be improved, creating new analytics that could benefit the wider community.

Is it time for open analytics?

Thank you to Lawrie Phipps for the conversations we had after the Digital Pedagogy Lab: Prince Edward Island conference and this blog post.

Emerged Technologies

Four years is a long time in technology, but how much has happened since 2011?

Back in November 2011 I was asked by the AoC to present at a conference with Donald Taylor on emerging technologies and how FE Colleges should be preparing for them.

My slides and Donald’s are in this slidedeck.

My notes from that presentation are here, but how much has changed since then, and has education really embraced and started to embed these emerging technologies?
