Do you remember when the Google search algorithm wasn't that good? Well, it was good, but today it's better.
Many years ago, if you searched for a hotel on Google, perhaps to find out if there was car parking or to find the restaurant menu on its website, the top results were usually not the hotel's own site but hotel booking sites offering cheap rooms. Pointless if you already had a room and all you wanted to know was whether you had to pay for parking, or what time you could check out. The problem was that the booking sites had worked out how the Google search algorithm ranked sites and "gamed" Google search.
Today, the experience is very different: a search for a specific hotel usually returns the hotel's actual website as the top hit.
Google worked on the algorithm to ensure that what they saw as the correct search result was the one at the top.
One thing many people don't realise is that Google not only worked on the software behind the algorithm, but also used human intervention to check that the algorithm was providing the search results they thought it should. If you wonder why Google search is better than the search functions on your intranet or VLE, this is probably why: Google uses people to improve search results, both to write the algorithms and to tweak the results. Using people can introduce bias.
So if we are to use algorithms to manage the next generation of learning environments, the intelligent campus, support for future apprenticeships and data-driven learning gains, how can we ensure that we recognise the bias? If we do recognise it, how do we mitigate it? If we use biased algorithms, can we make people aware that the results carry bias, and what that might mean? And if, like Google, we use human intervention, how is that managed?
One aspect of Google's search algorithm that some people find frustrating is that the whole process is secret and closed. No one apart from the engineers at Google really knows how the algorithms were written, how they work, or what level of human intervention there is.
So if we are to use algorithms to support teaching and learning, could we, by making those algorithms open, remove some of the fears and frustrations people have with a data-driven approach? By making the algorithms open, could we ensure that staff and learners can see how they work, how they learn, and why they produce the results they do?
Could we go one step further and allow people to edit or even create their own algorithms, making suggestions on how they could be improved and creating new analytics that could benefit the wider community?
Is it time for open analytics?
Thank you to Lawrie Phipps for the conversations we had after the Digital Pedagogy Lab: Prince Edward Island conference, which led to this blog post.