A surge in anxiety and stress is sweeping UK campuses. What is troubling students, and is it the universities’ job to fix it?
We know that there is a student mental health crisis, but the reasons for it aren't necessarily clear. We know there has been an increase in demand for mental health services at universities. The article notes that there has been research into the causes, but lays the blame for the crisis on the way in which universities are managed and run, which leaves students not in control of what they do and saddled with debt.
Monday was another trip to London. I had been expecting to participate in a workshop, but this was cancelled late last week; as I already had train tickets and another meeting in the diary, I decided to head up anyhow. The weather was changeable, raining whilst I was on the train, but it had cleared up by the time I arrived in London.
I saw this link in my news feed and it did make me think more about how we could use AI to support learning, but also reflect on some of the real challenges in making this happen. Also, do we want this to happen?
In the afternoon, in the office, we discussed Education 4.0 and how we are going to move it forward in terms of expert thinking and messaging.
Tuesday was a busy day: first a meeting in the Bristol office, before heading up to Cheltenham for a meeting at the HESA office.
I haven't been on a CrossCountry train for a while, so travelling to Cheltenham Spa from Bristol Temple Meads I was interested to see what the connectivity issues I've always had on that route would be like, especially as I now have 4G with Three. Well, same old problems: dipping in and out between 4G and 3G, as well as periods of No Service. I would like to blame the train, but the reality is that phone signal is simply poor along that route, and there is no incentive for mobile network providers to improve coverage there.
If I do go to Cheltenham again, I think I will take a book!
We discussed the Data Matters 2020 Conference, which is now in my portfolio. It is still a work in progress, and the proposal needs to be signed off by key stakeholders.
Whilst I was in Cheltenham I bumped into my old colleague Deborah from Gloucestershire College and we had a chat. It was nice to hear how many of my former team and colleagues there had started out in learning technology and were now doing new and more exciting jobs at universities across the UK.
Wednesday there was rain. I spent the day preparing for an afternoon meeting and tidying up my inbox, though I did find time for a coffee.
Thanks to Lawrie for the link, I read this report on the iPASS system, which uses data and analytics to identify students at risk.
The three institutions increased the emphasis on providing timely support, boosted their use of advising technologies, and used administrative and communication strategies to increase student contact with advisers.
This report shows that the enhancements generally produced only a modestly different experience for students in the program group compared with students in the control group, although at one college, the enhancements did substantially increase the number of students who had contact with an adviser. Consequently, it is not surprising that the enhancements have so far had no discernible positive effects on students’ academic performance.
Looks like it didn't have the impact they thought it might.
In a couple of weeks I am recording a podcast, and I met the organiser today to discuss content and format. Without giving too much away, we will be covering the importance of people in any digital transformation programme: ensuring that they are part of the process, are consulted, and are given appropriate training in the wider context of their overall skills and capabilities. You can't just give people new digital systems and expect them to be able to use them from day one, even with specific training. Familiarity with digital in its wider context is often critical, but is equally often forgotten.
Whilst writing a blog post about online learning I wrote the following:
Conversations are really hard to follow in e-mail, mainly because people don't respond in a linear manner; they add their comment to the top of their reply.
When I first started using e-mail in 1997, well actually I first started using e-mail in 1987, but then got flamed by the e-mail administrator at Brunel University, so stopped using it for ten years….
When I re-started using e-mail in 1997, there was an expectation that when replying you would write your response underneath the original e-mail, bottom posting, something I picked up from using usenet newsgroups. This is from RFC 1855:
If you are sending a reply to a message or a posting be sure you summarize the original at the top of the message, or include just enough text of the original to give a context. This will make sure readers understand when they start to read your response. Since NetNews, especially, is proliferated by distributing the postings from one host to another, it is possible to see a response to a message before seeing the original. Giving context helps everyone. But do not include the entire original!
By the early 2000s lots more people were using e-mail and most of the time they were replying at the start of the e-mail, top-posting. There were quite a few people in my circles who continued to bottom post their replies, which made sense when reading a threaded conversation, but confused the hell out of people who didn’t understand why someone replied to a conversation, and from what they could see, hadn’t written anything!
Today top-posting appears to be the norm and I can’t recall when I last saw someone responding to an e-mail by replying at the end of the quoted reply.
Here is the blog post I wrote, about how online learning doesn’t just happen.
Friday was about planning, planning and even some forward planning. One thing that has puzzled me for a long time is the difference between forward planning and planning. Thanks to Google, I now have a better idea.
Forward planning is being pro-active, predicting the future and then planning to achieve that prediction.
The opposite is backward planning, which is more reactive, you wait until you get a request or management decision then create a plan to achieve it.
So what is plain and simple planning then?
Wikipedia says that planning is the process of thinking about the activities required to achieve a desired goal.
So some of what I am doing in my planning is responding to both requested goals and planning for some predicted goals.
We had our weekly meeting about the Technical Career Pathways we are developing at Jisc. I am responsible for the Learning and Research Technical Career Pathway.
An interesting read on how China is using its technological might (and economies of scale) to apply AI to the demand for tutoring, both for catching up and for the college entrance exam, the gaokao.
The article says that three factors have driven AI education in China: first, a $1bn investment supported by tax breaks; second, intense demand for tutoring, and people (read: parents) willing to pay for it; third, masses of data to refine algorithms, and a population not as concerned about data privacy as we are in the West.
This AI teaching model is gaining traction in China, so could the UK do something similar? Well, we certainly couldn't match the investment, people don't like paying for this kind of thing, and we have big ethical concerns about data and privacy.
Could we learn from China?
Well, one aspect of the tutoring system is how it adapts to the needs of the learner and, as a result, personalises the learning made available to them. Could we use a similar system to support learners on their learning journey? Not necessarily covering the entire journey, but aspects and parts of it.
The system doesn't necessarily remove the human function within learning and teaching, in the same way that books didn't, and in reality neither did the VLE. What it does do is free up time for teachers to focus on those aspects of learning which technology and AI can't do well, or can't do at all.
I don't see AI systems replacing teachers, in the same way that textbooks and workbooks don't replace teachers. What I can see is how such systems could enhance the learning journey: rather than a blanket list of resources and links, a more focused and personalised approach to meeting learner needs.
I can also see a system working intelligently to understand the context the learner is learning in. Are they travelling, on campus, or at home? What is their connectivity like? Are they on their own, or with peers from their cohort? What assessments do they have coming up? Have they been working for a while; do they need a break?
Before we get to that, there is a lot of work to be done on how we measure these aspects of learner context. Importantly, we also need to consider the ethical aspects of any such system.
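To make this a little more concrete, here is a minimal sketch of what capturing that learner context might look like. Every field name, value and threshold below is a hypothetical assumption for illustration, not a description of any real system:

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical sketch of the learner context described above; every
# field here is an assumption made for illustration, not a real schema.
@dataclass
class LearnerContext:
    location: str              # e.g. "travelling", "campus", "home"
    connectivity: str          # e.g. "offline", "3G", "4G", "wifi"
    with_peers: bool           # alone, or with others from their cohort?
    upcoming_assessments: int  # assessments due in the coming week
    minutes_working: int       # how long they have been working so far

def suggest_break(ctx: LearnerContext) -> Optional[str]:
    """A crude illustrative rule: suggest a break after a long session."""
    if ctx.minutes_working >= 90:
        return "You've been working a while - time for a break?"
    return None

ctx = LearnerContext("campus", "wifi", False, 2, 105)
print(suggest_break(ctx))
```

Even a toy model like this makes the measurement problem obvious: half of these fields are hard to capture reliably, and all of them raise the ethical questions mentioned above.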
People are not fearful of algorithms, they’re fearful of agendas that the algorithms represent.
Over the last few weeks I have been discussing and listening to people’s views on the intelligent campus.
One topic which has resulted in a fair bit of controversy is the concept of using artificial intelligence to support teaching and learning. This isn’t some kind of HAL 9000 computer running the campus and refusing to open the library doors until Dave the learner has finished their essay. This is more about a campus system being able to learn from the users, take that data, do some analysis and make suggestions to the user on potential ideas for improvement and useful interventions.
Imagine a learner arriving on campus with the intention of writing an essay, needing a quiet place in which to do this. They check their Campus App on their smartphone and it recommends a location based on the ambient noise levels and the type of environment the learner has used before. It could take into account the distance from the coffee shop, depending on whether coffee is a distraction or something that supports the learner in writing their essay. The learner can of course ignore all this and go wherever they want; the app provides informed guidance, and learns as the learner undertakes more learning activities and uses different spaces.
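To illustrate the kind of scoring such an app might do, here is a hedged sketch; the spaces, weights and preference fields are all invented for the example, not a real Campus App API:

```python
# Candidate study spaces with made-up attributes for the sketch.
spaces = [
    {"name": "Library quiet zone", "noise_db": 35, "coffee_distance_m": 200},
    {"name": "Learning commons",   "noise_db": 55, "coffee_distance_m": 20},
    {"name": "Cafe study corner",  "noise_db": 65, "coffee_distance_m": 0},
]

# Preferences the app might have learned from past sessions (hypothetical).
preferences = {"max_noise_db": 45, "coffee_is_distraction": True}

def score(space, prefs):
    s = 0.0
    # Quieter than the learner's usual tolerance scores well.
    s += prefs["max_noise_db"] - space["noise_db"]
    # Distance from coffee counts for or against, depending on the learner.
    if prefs["coffee_is_distraction"]:
        s += space["coffee_distance_m"] / 10   # further away is better
    else:
        s -= space["coffee_distance_m"] / 10   # closer is better
    return s

best = max(spaces, key=lambda sp: score(sp, preferences))
print(f"Suggested space: {best['name']}")  # Library quiet zone
```

The point of the sketch is that the recommendation is just a transparent weighting of a few signals, exactly the sort of thing that could be shown to the learner rather than hidden from them.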
Another scenario is a teacher planning a session with some relatively interactive and engaging learning activities. They ask the intelligent campus where the best place for this to happen would be. The system takes on board the preferences of the teacher, the availability of rooms, information from previously successful similar sessions, and any feedback from learners. The teacher can then make an informed choice about the best space for the session. After the learning, the system asks for feedback so that it can learn from, and improve, the decisions it makes.
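That "learn from feedback" step could be as simple as keeping a running score per room and blending in each new rating. Again, this is only a sketch, with the rooms, ratings and blending factor all assumed:

```python
# Illustrative only: running suitability scores per room (1-5 scale).
room_scores = {"Seminar Room 3": 3.8, "Active Learning Lab": 4.5}

def record_feedback(room: str, rating: float, alpha: float = 0.2) -> None:
    """Blend new feedback into the room's running score
    using an exponential moving average."""
    current = room_scores.get(room, rating)
    room_scores[room] = (1 - alpha) * current + alpha * rating

record_feedback("Active Learning Lab", 5.0)  # a successful interactive session
print(room_scores)
```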
I think some of the issues (or should we call them problems and fears?) that people have with a concept such as this is that they feel any such algorithm is secret and hidden, and will have a built-in bias.
So if we are to use algorithms to manage the next generation of learning environments, the intelligent campus, support for future apprenticeships and data-driven learning gains, how can we ensure that we recognise that there is bias? If we do recognise the bias, how do we mitigate it? And if we use biased algorithms anyway, can we make people aware that the results carry a bias, and what that might mean?
People are not fearful of algorithms, they’re fearful of agendas that the algorithms represent. But if we make these algorithms open and accessible, we could mitigate some of those concerns.
So if we are to use algorithms to support teaching and learning, could we, by making the algorithms open, remove some of the fears and frustrations people have with a data-driven approach? By making the algorithms open, could we ensure that staff and learners can see how they work, how they learn, and why they produce the results they do?
This brings up another issue people have mentioned: the bias any algorithm has, bias which comes from the people who write it, sometimes consciously, sometimes unconsciously. There is an assumption that these algorithms are static and unchanging, written by people who have an agenda. As we know from using Google and other services, these algorithms are constantly changing and being tweaked.
Could we go one step further and allow people to edit or even create their own algorithms, letting them make suggestions on how the algorithms could be improved and create new analytics that could benefit the wider community?
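In practice, "open, editable algorithms" could be as simple as a registry of scoring functions whose source anyone can read, swap or extend. The sketch below is entirely hypothetical, not a proposal for a specific system:

```python
from typing import Callable, Dict

# A registry of openly readable scoring algorithms; staff and learners
# could inspect each one, pick which to use, or register their own.
Algorithm = Callable[[Dict], float]
registry: Dict[str, Algorithm] = {}

def register(name: str):
    def wrap(fn: Algorithm) -> Algorithm:
        registry[name] = fn
        return fn
    return wrap

@register("quiet-first")
def quiet_first(space: Dict) -> float:
    return -space["noise_db"]  # the quieter the better

@register("coffee-first")
def coffee_first(space: Dict) -> float:
    return -space["coffee_distance_m"]  # the closer to coffee the better

# Anyone can see how each algorithm ranks a space, and why.
space = {"noise_db": 40, "coffee_distance_m": 50}
for name, algo in registry.items():
    print(name, algo(space))
```

The design choice matters more than the code: if the scoring functions are visible and swappable, the "secret algorithm with a hidden agenda" fear has much less to feed on.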
We need to embrace the benefits of a smart campus, because the technology is already here, but we need one which learns from the people who use it, and we need to ensure that those people are the ones who inform and guide the development of that learning. They should be able to decide which intelligent campus decisions to benefit from and which to ignore. By making the whole process open and accessible, we can provide confidence in the decision making and feel we can trust those decisions. We mustn't forget that giving people that literacy in this area is perhaps the most important thing of all.
Remember that in both scenarios above the learner and the teacher ultimately have the option to ignore the intelligent campus's suggestions; they can decide themselves to close the pod bay doors.