Tag Archives: siri

Record temperatures – Weeknote #177 – 22nd July 2022

This week saw record temperatures as a red warning heatwave hit the UK. I spent the week working from home, as trains were cancelled or delayed and there were problems on the roads.

I wrote a blog post on how I can teach anywhere.

I used to say things like “I can teach anywhere”. What I meant by this wasn’t that the environment or space I was using didn’t matter, but that I could overcome the disadvantages of the different spaces I had to play with and still deliver an effective session.

So though I might be able to teach anywhere, the reality is that all those challenges and issues I face in an inappropriate space may well result in poor quality learning, despite the quality of my teaching.

Big news this week was that the QAA was to step away from its designated role in England. Over on Wonkhe, David Kernohan tries to make sense of it all.

The Quality Assurance Agency (QAA) will no longer consent to be the Designated Quality Body (DQB) in England, as of the end of the current year in office (March 2023). The reasoning is straightforward – the work that QAA does in England, on behalf of the OfS, is no longer compliant with recognised quality standards – namely the European Standards and Guidelines (ESG) as monitored by the European Quality Assurance Register for Higher Education (EQAR). For this reason, the QAA registration with EQAR was recently suspended – a decision that highlights international concerns about procedures in England but has an impact in the many other nations (including Scotland and Wales) where QAA needs that EQAR registration in order to fulfil a statutory quality assurance role.

Once more we are seeing further divergence in higher education across the UK.

Alexa
Image by finnhart from Pixabay

I revisited and revised a blog post on voice assistants I had written back in 2018.

Hey Siri, what’s my day like today? Alexa, when’s my next lesson? Okay Google, where are my library books?

Voice assistants have become widespread and are proving useful for a range of tasks. The cost has fallen over the years and the services have expanded.

The use of voice assistants and smart hubs has certainly continued, and they have become embedded into many digital ecosystems. Their use in education though is still limited and I will be looking at that in a later blog post.

Attended a session on impact this week, which was interesting, but not necessarily that useful. How do you evidence the impact of what you do? I wonder, for example, if the 1,828 blog posts published on this blog have had any impact on the way in which people work, support others or plan their work. One of the most popular posts on the blog, though written in 2011, is still regularly viewed: 100 ways to use a VLE – #89 Embedding a Comic Strip, one of a series of posts on improving or enhancing the use of the VLE.

One use of graphics that can enhance the look of a VLE course, or act as a mechanism to engage learners, is to embed a comic strip into the VLE course.

What has been the impact of this? Has it changed practice? Has it improved the student experience? Has it improved student outcomes? How would I know?

I don’t think I can evidence the impact of this, but for other work I have done I can sometimes see the evidence; however, I don’t know if there has been actual impact.

I quite liked these tweets from August 2021 from people who had attended the digital leadership consultancy I had delivered for Leeds.

As part of the programme I had delivered a session on e-mail, which incorporates much of what is in this blog post on Inbox Zero and this follow-up post. It is always nice to see the impact that your training has had on the way people work: they didn’t just attend and engage with the training, but are now acting on what they saw and learnt.

However what I don’t know is, has the change had a positive impact? And what was that impact?

I spent some of the week reviewing our new guide to the Intelligent Campus, and the revamped guide to the Intelligent Library. The library guide was never published but has been updated for 2022. I also reviewed our updated use cases, as well as drafting plans for some additional use cases. I am aiming for publication of these in the autumn.

letters
Image by Gerhard G. from Pixabay

If you are going on leave over the summer, you may want to look at this blog post on managing your summer e-mail.

My top tweet this week was this one.

Hey Siri, what’s my day like today? Alexa, when’s my next lesson? Okay Google, where are my library books?

Microphone
Image by rafabendo from Pixabay

Voice assistants have become widespread and are proving useful for a range of tasks. The cost has fallen over the years and the services have expanded.

Google report that 27% of the global online population is using voice search on mobile.

Alexa was announced by Amazon in November 2014 alongside the Echo devices, which act as connected speakers and hubs for voice-controlled devices, complete in some cases with small screens. Amazon continues to innovate and develop its Alexa devices, including car adapters and headphones.

Alexa
Image by finnhart from Pixabay

Cortana from Microsoft was demonstrated in April 2014 and was released as part of Windows 10 in 2015. In March 2021, Microsoft shut down the Cortana apps entirely for iOS and Android and removed them from the corresponding app stores. Microsoft has also reduced the emphasis on Cortana in Windows 11: Cortana is not used during the new device setup process and isn’t pinned to the taskbar by default.

Bixby from Samsung was announced in March 2017. Unlike other voice assistant vendors, Samsung is building Bixby into a range of consumer goods it manufactures, such as refrigerators and TVs.

Google has Google Nest, originally released as Google Home, which was announced in May 2016 and released in the UK the following year. In May 2019, Google rebranded Google Home devices under the Google Nest banner and unveiled the Nest Hub Max, a larger smart display.

Google Home
Image by antonbe from Pixabay

Google Nest speakers enable users to speak voice commands to interact with services through Google’s intelligent personal assistant called Google Assistant.

And of course there is Siri from Apple. Siri was originally released as a stand-alone application for the iOS operating system in February 2010; after being acquired by Apple, it was released as part of the operating system in October 2011. It wasn’t until February 2018 that Apple released its own connected speaker hub, the HomePod, which was joined by the smaller HomePod mini in November 2020.

Many of these voice assistants started their journey on mobile devices, but over the last few years we have seen connected voice controlled hubs appearing on the market.

An online poll in May 2017 found the most widely used in the US were Apple’s Siri (34%), Google Assistant (19%), Amazon Alexa (6%), and Microsoft Cortana (4%).

Though we might think about how we can embed these into the classroom or into education, they are not aimed at this market; they are consumer devices aimed at individuals. Our students are certainly the type of consumers who may purchase these devices, and they will want to be able to connect them to the university or college services they use.

group
Photo by Annie Spratt on Unsplash

All the voice assistants require some kind of link to information and in some cases data.

If I ask Alexa to play a particular song, she delves not just into my personal music collection on the Amazon Music app but also into what is available through my Prime subscription. If the song isn’t available, I could either subscribe to the Amazon Music streaming service or purchase the song. The Alexa ecosystem is built around my Amazon account and the services available to me as a Prime subscriber.

I have connected my free Spotify account to Google Nest. One of the key features of these devices is that you can connect services you already subscribe to, so you can control them by voice. Of course, the reason I have a free Spotify account is that Google Nest would much prefer I was connected to Google Music; it certainly won’t let me connect to either my home iTunes library (where virtually all my music is) or to Amazon Music. So when I ask Google Nest to play a particular music track, she gets annoyed and says that she can’t, as that track is only available on Spotify Premium.

This is one of the challenges of these devices: they are quite reliant on subscriptions to other services. Apple’s HomePod only really works if you have an Apple Music subscription.

When it comes to connecting services to voice assistants, there are two key challenges: can you get the right data out to the right people, and can you do this across the range of voice assistants available, especially when you remember that there is no de facto standard for voice assistants?

It would be useful to know and understand what sorts of questions would be asked of these assistants. There are the obvious ones, such as: where is my next lesson? What books would be useful for this topic? When is my tutor free for a quick chat about my assignment? Do I need to come into college today?

Even simple questions could result in a complicated route through multiple online systems. Imagine asking: where and when is my next lecture, what resources are available, and are there any relevant books in the library on this subject? The module design or course information system (more likely a dumb document) would hold the information on what comes next. Timetabling systems would be able to tell the learner where and when the lesson is, with an extra layer of last-minute changes because of staff sickness or building work resulting in a room change. The resources for the session may be on the VLE or another platform, while additional reading could sit on the library systems. How would the voice assistant know what to do with all this information? Could it push the links to a mobile device? Add in a social platform, say a closed Facebook group, or a collaborative tool such as Slack, and you start to see how a simple question about what I am doing next, and where, becomes rather complicated.
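To make that concrete, here is a minimal sketch of how that one question might be stitched together behind the scenes. It is only an illustration: the endpoint URLs, field names and the assumption that each campus system exposes a clean API are all hypothetical, not a description of any real institution’s systems.

```python
# Hypothetical sketch: answering "where and when is my next lecture,
# and what resources are available?" by orchestrating several systems.
# All URLs, paths and field names below are made up for illustration.
import requests

TIMETABLE_API = "https://timetable.example.ac.uk/api"  # hypothetical
VLE_API = "https://vle.example.ac.uk/api"              # hypothetical
LIBRARY_API = "https://library.example.ac.uk/api"      # hypothetical


def next_lecture_briefing(student_id: str) -> str:
    # 1. Timetabling: where and when, including last-minute room changes
    lecture = requests.get(
        f"{TIMETABLE_API}/students/{student_id}/next-event", timeout=5
    ).json()

    # 2. VLE (or another platform): resources attached to that session
    resources = requests.get(
        f"{VLE_API}/modules/{lecture['module_id']}/resources", timeout=5
    ).json()

    # 3. Library systems: additional reading for the module
    books = requests.get(
        f"{LIBRARY_API}/reading-lists/{lecture['module_id']}", timeout=5
    ).json()

    # The assistant would speak a short summary and could push the
    # detail (links, maps, reading list) to the learner's mobile device.
    return (
        f"Your next lecture, {lecture['title']}, is at {lecture['start_time']} "
        f"in {lecture['room']}. There are {len(resources)} resources on the VLE "
        f"and {len(books)} recommended books in the library."
    )
```

Even this simplified version assumes that several different systems agree on a module identifier, which is exactly where the complication described above creeps in.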

student
Image by Karolina Grabowska from Pixabay

There is, though, something to be said for ensuring services work with voice assistants, as the same data and information could also be used with chatbot interfaces (i.e. textual assistants) and with campus-bound services such as kiosks or web portals. Get the data right and it’s simply a matter of ensuring the interface, whether voice, text or screen, is working. Learning analytics services rely on a hub where academic and engagement data is collected, stored and processed. Could we use a similar data structure to build the back-end system for chatbots, kiosks and voice assistants?

Could we Siri? Could we?
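Perhaps. As a rough sketch of that “get the data right first” idea, the code below assumes a single (entirely hypothetical) CampusDataHub, with voice, chatbot and kiosk front ends as thin adapters over the same answer; none of the class or method names refer to an existing product.

```python
# Illustrative only: one shared data hub, many thin interfaces.
from dataclasses import dataclass


@dataclass
class Answer:
    text: str    # plain text, suitable for chatbots, kiosks and web portals
    speech: str  # shorter, spoken-friendly version for voice assistants


class CampusDataHub:
    """A single hub where timetable, VLE, library and engagement data is
    collected, stored and processed (much as a learning analytics hub is)."""

    def next_lesson(self, student_id: str) -> Answer:
        # In practice this would query the hub's data store; hard-coded here.
        return Answer(
            text="Next lesson: Data Science 101, 14:00, Room 2.12.",
            speech="Your next lesson is Data Science 101 at 2pm in room 2.12.",
        )


# The interfaces become the easy part once the data is right.
def voice_assistant_handler(student_id: str, hub: CampusDataHub) -> str:
    return hub.next_lesson(student_id).speech


def chatbot_handler(student_id: str, hub: CampusDataHub) -> str:
    return hub.next_lesson(student_id).text


def kiosk_handler(student_id: str, hub: CampusDataHub) -> str:
    return hub.next_lesson(student_id).text
```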

This is an updated version of a blog post I wrote in August 2018 on the Jisc Intelligent Campus project blog.

Hey Siri, are you real?

Following on from my recent blog post about installing voice assistants on campus, I read an article, Giving human touch to Alexa or Siri can backfire, on how trying to make voice assistants appear human, or giving them human touches, may not give you the results you were looking for.

A team has found that giving a human touch to chatbots like Apple Siri or Amazon Alexa may actually disappoint users.

Just giving a chatbot human name or adding human-like features to its avatar might not be enough to win over a user if the device fails to maintain a conversational back-and-forth with that person…

This reminded me of a conversation I had at an Intelligent Campus workshop, where we discussed how trying to make chatbots appear human was probably not a good idea, and how maybe institutions should intentionally make their chatbot non-human.

There are potential challenges as Microsoft found out with their paper clip assistant, but was that because it was a paper clip or because it was annoying?

Clippy

In many ways Clippy was the ancestor of Siri, Cortana and other modern day assistants.

A non-human chatbot could also avoid some of the gender issues that occur when deciding if your chatbot is female or male.

This Guardian article from March discusses this contentious issue of gender for voice assistants.

Providing assistance has long been considered a woman’s role, whether virtual or physical, fictional or real. The robots that men voice, meanwhile, tend to be in positions of power – often dangerously so. Think Hal 9000, or the Terminator: when a robot needs to be scary, it sounds like a man.

Patriarchy tells us that women serve, while men order, and technology firms seem content to play into stereotypes, rather than risk the potentially jarring results of challenging them.

The article talks about EqualAI:

EqualAI, an initiative dedicated to correcting gender bias in AI, has backed the creation of Q, what it says is the first genderless voice.

So if you do have a non-human chatbot and you want to extend it to be a voice assistant, at least soon you will be able to have a genderless voice behind it.

So what (rather than who) should be your chatbot? Well it could be an anthropomorphic animal or maybe something else that is special to your university or college.

So what would your chatbot be?

Hey Siri, how am I doing on my course?

Of course if you ask Siri today, her reply is, well it depends…

Screenshot: Siri’s response

Another response to the same question…

Screenshot: another of Siri’s responses

As for tomorrow… well, could it be different?

However, for other things, intelligent assistants such as Siri, Cortana and Google Now can be a really useful mechanism for finding things out, keeping track of things you need to do and, in many instances, fun stuff too.

I have been wondering if we could utilise these technologies and apply them to learning.

We know that most (if not all) people can usually tell, when using a chat function on a website, whether they are talking to a chatbot or to a real person.

Could we not even try to pretend that the intelligent agent is real, and instead play up to the fact that it is artificial? Give it a name and even create a personality for it?

This way we could avoid some of those ethical issues about a piece of software trying to pretend to be human. The students know it is not real and it is advertised accordingly.

But could such a tool be a useful thing for students to have access to? One that could learn from its experiences and fine-tune its responses and interventions?

It doesn’t necessarily have to be voice controlled; users could use text entries.

So what kind of questions could learners ask?

What is the best book for this module?

Where is a quiet place to study on campus?

What other questions could we expect learners to be asking?
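As a toy illustration of the openly artificial assistant described above, even simple keyword matching over a handful of intents would be a starting point. The assistant’s name, the keywords and the canned answers below are purely made up.

```python
# Toy sketch: a named, openly artificial, text-based campus assistant.
ASSISTANT_NAME = "Brunel"  # hypothetical mascot-style name

INTENTS = {
    "best book": "The reading list for this module suggests starting with the core text on the VLE.",
    "quiet place": "The top floor of the library is usually the quietest place to study.",
    "next lesson": "Your next lesson is at 2pm in room 2.12.",
}


def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in INTENTS.items():
        if keywords in text:
            return f"{ASSISTANT_NAME}: {answer}"
    # Be upfront about being artificial rather than pretending to be human.
    return f"{ASSISTANT_NAME}: I'm only a bot, and I don't know that one yet."


print(reply("What is the best book for this module?"))
print(reply("Where is a quiet place to study on campus?"))
```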

Emerged Technologies


Four years is a long time in technology, but how much has happened since 2011?

Back in November 2011 I was asked by the AoC to present at a conference with Donald Taylor on emerging technologies and how FE Colleges should be preparing for them.

My slides and Donald’s are in this slidedeck.

My notes from that presentation are here, but how much has changed since then, and has education really embraced and started to embed these emerging technologies?
