Category Archives: audio

Hey Siri, what’s my day like today? Alexa when’s my next lesson? Okay Google, where are my library books?

Microphone
Image by rafabendo from Pixabay

Voice assistants have become widespread and are proving useful for a range of tasks. Their cost has fallen over the years and the services they offer have expanded.

Google report that 27% of the global online population is using voice search on mobile.

Alexa was announced by Amazon in November 2014 alongside the Echo devices, connected hubs complete with speakers and, in some cases, small screens, which act as controllers for voice-controlled devices around the home. Amazon continues to innovate and develop the Alexa range, including car adapters and headphones.

Alexa
Image by finnhart from Pixabay

Cortana from Microsoft was demonstrated in April 2013, and was released as part of Windows 10 in 2015. In March 2021, Microsoft shut down the Cortana apps entirely for iOS and Android and removed them from corresponding app stores. Microsoft has also reduced emphasis on Cortana in Windows 11. Cortana is not used during the new device setup process and isn’t pinned to the taskbar by default.

Bixby from Samsung was announced in March 2017. Unlike other voice assistants, Samsung is going to build Bixby into a range of consumer goods it manufactures, such as refrigerators and TVs.

Google has Google Nest, originally released as Google Home, which was announced in May 2016 and released in the UK the following year. In May 2019, Google rebranded Google Home devices under the Google Nest banner and unveiled the Nest Hub Max, a larger smart display.

Google Home
Image by antonbe from Pixabay

Google Nest speakers enable users to speak voice commands to interact with services through Google’s intelligent personal assistant called Google Assistant.

And of course there is Siri from Apple. Siri was originally released as a stand-alone application for the iOS operating system in February 2010, but after being acquired by Apple it was released as part of the operating system in October 2011. It wasn’t until February 2018 that Apple released its own connected speaker hub, the HomePod, which was followed by the HomePod Mini in November 2020.

Many of these voice assistants started their journey on mobile devices, but over the last few years we have seen connected voice controlled hubs appearing on the market.

An online poll in May 2017 found the most widely used in the US were Apple’s Siri (34%), Google Assistant (19%), Amazon Alexa (6%), and Microsoft Cortana (4%).

Though we might want to see how we can embed these devices into the classroom or education, they are not aimed at this market; they are consumer devices aimed at individuals. Our students are certainly the type of consumers who may purchase these devices, and they will want to be able to connect them to the university or college services they use.

group
Photo by Annie Spratt on Unsplash

All the voice assistants require some kind of link to information and in some cases data.

If I ask Alexa to play a particular song, she delves not just into my personal music collection on the Amazon Music app but also into what is available through my Prime subscription. If the song isn’t available I could either subscribe to the Amazon Music streaming service, or purchase the song. The Alexa ecosystem is built around my Amazon account and the services available to me as a Prime subscriber.

With Google Nest I have connected my free Spotify account to it. This is one of the key features of these devices: you can connect services you already subscribe to, so you can control them via voice. Of course the reason I have a free Spotify account is that Google Nest would much prefer I was connected to Google Music, and it certainly won’t let me connect to either my home iTunes library (where virtually all my music is) or to Amazon Music. So when I ask Google Nest to play a particular music track, she gets annoyed and says that she can’t, as that track is only available on Spotify Premium.

This is one of the challenges of these devices: they are quite reliant on subscriptions to other services. Apple’s HomePod only really works if you have an Apple Music subscription.

When it comes to connecting services to voice assistants there are two key challenges: can you get the right data out to the right people, and can you do this across the range of voice assistants available, especially when you remember that there is no de facto standard for voice assistants?

It would be useful to know and understand what sorts of questions would be asked of these assistants. There are the obvious ones, such as: where is my next lesson? What books would be useful for this topic? When is my tutor free for a quick chat about an assignment? Do I need to come into college today? Even simple questions could result in a complicated route through multiple online systems. Imagine asking: where and when is my next lecture, what resources are available, and are there any relevant books in the library on this subject? The module design or course information system (or, more likely, a dumb document) would have the information on what comes next. Timetabling systems would be able to tell the learner where and when the lesson was, with the extra layer of last-minute changes because of staff sickness, or building work resulting in a room change. The resources for the session may be on the VLE or another platform, and additional reading on the library systems.

How would the voice assistant know what to do with all this information? Could it push the links to a mobile device? Add in a social platform, say a closed Facebook group, or a collaborative tool such as Slack, and you start to see how a simple question about what am I doing next and where, becomes rather complicated.
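
To make that concrete, here is a minimal sketch of how a “next lesson” query might be answered by pulling together data from separate campus systems. The endpoint URLs, field names and the APIs themselves are hypothetical assumptions for illustration; no institution exposes exactly these services.

```python
# Hypothetical sketch: how a "next lesson" voice intent might pull together
# data from separate campus systems. The endpoint URLs, field names and the
# institutional APIs are illustrative assumptions, not real services.
from dataclasses import dataclass

import requests  # assumes the campus systems expose simple JSON/REST endpoints


@dataclass
class NextLessonAnswer:
    module: str
    room: str
    start_time: str
    reading_list: list[str]


def answer_next_lesson(student_id: str) -> NextLessonAnswer:
    # Timetabling system: where and when the next session is (last-minute
    # room changes would be this system's responsibility).
    lesson = requests.get(
        f"https://timetable.example.ac.uk/api/students/{student_id}/next"
    ).json()

    # Library system: a few relevant titles for the module being taught.
    books = requests.get(
        "https://library.example.ac.uk/api/reading-list",
        params={"module": lesson["module_code"]},
    ).json()

    return NextLessonAnswer(
        module=lesson["module_title"],
        room=lesson["room"],
        start_time=lesson["start_time"],
        reading_list=[b["title"] for b in books][:3],
    )


def to_speech(answer: NextLessonAnswer) -> str:
    # The same structured answer could be rendered as speech, as a chatbot
    # message, or pushed as links to a mobile device.
    return (
        f"Your next session is {answer.module} in {answer.room} "
        f"at {answer.start_time}."
    )
```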

student
Image by Karolina Grabowska from Pixabay

There is, though, something to be said for ensuring services work with voice assistants, as the same data and information could also be used with chatbot interfaces (i.e. textual assistants) and with campus-bound services such as kiosks or web portals. Get the data right and it’s simply a matter of ensuring the interface, whether voice, text or screen, is working. Learning analytics services rely on a hub where academic and engagement data is collected, stored and processed. Could we use a similar data structure to build the back-end system for chatbots, kiosks and voice assistants?

Could we Siri? Could we?

This is an updated version of a blog post I wrote in August 2018 on the Jisc Intelligent Campus project blog.

The VLE is not dead – Weeknote #167 – 13th May 2022

Image by drippycat from Pixabay

Monday morning, I was off to Queen Mary University of London for their VLE Expo. This was very much a QMUL-focussed event, though they had invited a range of VLE vendors. I liked how the focus of the event was on three questions: what do we want to do to achieve our strategic aspirations, how will the VLE help us to do that, and which platform (or platforms) will enable us to do that?

There were some excellent presentations from the academic staff on the different ways in which they were using technology including virtual reality, mixed reality and H5P. I sat on the final panel session answering questions from the floor on a range of issues. A lot of the questions were more about the use of technology for learning and teaching, than VLE specific topics. However, I did get into a few discussions about the VLE on the Twitter as a result of attending the event.

I posted another blog post in my Lost in Translation series this time with a focus on the technical aspects of recording videos or audio files.

Most institutions will (probably) have equipment which staff can use, but if there is a strategic approach to building a sustainable approach to the use of video and audio, then universities will need to reflect if they have sufficient resources to support the increased demand for cameras and microphones.

video recording
Image by StockSnap from Pixabay

Tuesday I was still in London for a briefing session; as it happened it was cancelled, so I worked in the office.

Apple have announced that they are going to stop selling the iPod once the current stocks of iPod touch run out. So did you have an iPod and if so which one?

iPod
Photo by Cartoons Plural on Unsplash

Wednesday, I did two all-staff briefings for two directorates on the Jisc HE sector strategy. From the feedback I got they seemed to be well received.

I was reminded on the Twitter about when I took my bike to work. I made a video back then.

Mike Sharples posted an excellent Twitter thread on how AI can be used to write essays. I agree with Mike, if we are setting students assignments that can be answered by AI, are we really helping students learn?

I enjoyed the #LTHEchat on images in presentations in the evening.

These two blog posts from 2005 (and 2007) were very influential on my presentation style: Gates, Jobs, & the Zen aesthetic and Learning from Bill Gates & Steve Jobs. I also posted a link to a presentation from an internal TEDx event about delivering presentations – A duck goes quack.

Thursday, I made my way to Harwell for a drop-in session I was running at the Jisc offices there; alas, an accident and the resulting closure of the M4 meant I spent nearly four hours sitting in the car rather than sitting in a room talking to Jisc staff. In the end I had to abandon my visit to the office.

Friday, I had a scoping call about learning spaces in higher education. I am interested in the kinds of learning spaces higher education is using, their flexibility, the technology involved, and the kinds of activities the spaces are being used for.

I found this WonkHE article interesting – Learning design is the key to assuring the quality of modular provision – in which Nick Mount talks about building quality assurance into the design of modular programmes and micro-credentials.

Traditional providers can expect to find themselves facing the difficult job of rethinking existing assurance processes that are designed for coherent, longitudinal programmes of study, so that they can accommodate a new pick-and-mix landscape of highly portable and stackable micro-credential learning.

My top tweet this week was this one.

Lost in translation: cameras and microphones

video recording
Image by StockSnap from Pixabay

As part of my work looking at the challenges in delivering teaching remotely during this crisis period, I have been reflecting on how teaching staff can translate their existing practice into new models of delivery that could result in better learning, but also have less of a detrimental impact on staff and students.

One of the things we noticed when the pandemic struck and lockdown happened, as the education sector moved rapidly to remote delivery, was the different models that people used. However, what we did see was that many people were translating their usual practice to an online version.

In my post on translating the lecture I discussed the challenges of translating your 60-minute lecture into a 60-minute online video presentation.

There are some problems with this, as you are not providing an online video version of the lecture; you are using a platform like Teams or Zoom to deliver the lecture via a webcam. You will not be able to “read” the room as you can in a face-to-face environment. Video presentations also lose much of the energy that a physical presentation has. It can flatten the experience and people will disengage quite rapidly.

In a couple of posts in this series I discussed how you could reflect on the format of the lecture by looking at how content is produced and delivered for television and radio. 

One aspect I didn’t discuss in too much detail was the technical aspects of recording videos or audio files.

webcam
Photo by Waldemar Brandt on Unsplash

Back in the day, most laptops didn’t have webcams, and I remember buying external iSight cameras to use with my G5 Power Mac. Today you would be hard pressed to buy a laptop without a built-in webcam, the iPad comes with two cameras (front and back). It’s the same with microphones, the G5 Power Mac had an audio-in mini-jack for an external microphone, though I went out and got a USB Blue Snowball.

So today most people using a computer will have the technical capability to record video and audio easily. However there is more to creating high-quality content than the ability to turn on a webcam or speak into the laptop microphone. These tools are fine for video conferencing, but aren’t necessarily ideal for creating videos or audio recordings.

microphone
Image by StockSnap from Pixabay

Using external cameras and microphones is one way to enable better-quality recordings than the built-in hardware on your laptop.

During the pandemic lockdowns, using your laptop was acceptable. Moving forward, when creating new recordings, it makes sense to have better equipment. It’s not just about cameras, but also decent microphones for those cameras.

Most institutions will (probably) have equipment which staff can use, but if there is a strategic approach to building a sustainable approach to the use of video and audio, then universities will need to reflect if they have sufficient resources to support the increased demand for cameras and microphones.

video recording
Image by Pexels from Pixabay

Going forward, maybe decent cameras and microphones will become a staple of academic kit, in the same way that laptops are now provided.

In a future post I will talk about creating an ideal environment for recording television style and radio content.

The future of learning… ten years later!

FOTE09

On the 2nd October 2009 I was at the ULCC Event, The Future of Technology in Education.

Little did I know the impact that this presentation would have on me, my future career and education in general.

I felt a little intimidated to be invited to talk at the event; we wouldn’t have called it imposter syndrome back then, but I did wonder if I was the right person to speak at such an interesting conference. It certainly had a TED talk feel to it. I must thank Frank Steiner and Tim Bush from ULCC for their support and help, and for inviting me to talk at this and future FOTE events.

2009 was quite a year for me, I had won the ALT Learning Technologist of the Year award that year. It was also the year of “The VLE is Dead” debate at the ALT Conference.

The event took place at the Royal Geographical Society in Kensington, which I remember wasn’t the easiest place to get to via the underground. Knowing London better now, I think I would probably have just walked across Hyde Park from Paddington to get there. From about 2001 I started going to London a lot for work, well, a few times a year, which was considerably more than when I was a lecturer in Bristol. I used to arrive at Paddington, take the underground, pop up somewhere, go to a meeting or an event, before popping back down into the underground on my way home. These days I visit London a lot more and have spent a lot more time walking around, so have a much better grasp of the geography of the place. I remember being quite impressed with the venue, and that you could see the nearby Albert Hall.

Albert Hall

I spent a fair bit of time putting my presentation together, in the end it comprised 82 slides… and I only had twenty minutes to deliver my talk. A challenge that took some doing.

My presentation was entitled The future of learning… The aim was to discuss how learning would and could change with the affordances of technological change.

So what of my predictions?

Well we know predicting the future is hard and generally most people get it wrong.

You will no doubt not be surprised that I got a lot of things wrong…

One thing I feel I did get right was that mobile was going to be big and important; I said how I felt mobile was the future. The audience did have a range of mobile devices themselves, but most phones were nothing more than phones that could do SMS and the Snake game. There were a few smartphones out there, but if my experience was anything to go by, they were clunky and difficult to use. We had the iPhone, but it hadn’t yet had the impact it has had since.

We didn’t have the iPad; that would arrive the following year. So no surprise that in my talk at FOTE I didn’t mention tablets.

My talk actually started off talking about the past, and how we are still impacted by, and embedded in, the past, which makes change challenging and difficult.

I then talked about the present and some of the issues and problems that technology was causing in classrooms and lecture theatres. PAT testing was a real concern for many back then; we don’t hear much about it these days in relation to BYOD or learner devices.

One of the challenges I saw back then was how academics and educationalists wanted to categorise learning, so we had e-learning, m-learning, mobile learning, online learning, digital learning, etc….

I said that I thought categorising learning and putting it into different boxes was restricting and that really we should focus on learning and blur the boxes, blur the boundaries.

Boxes

It was fine to talk about the “boxes” at conferences and in papers, but experience has shown that categorising learning into boxes caused confusion for teachers and academics, who rightly focussed on the word before the learning as a problem to be solved and then found it challenging.

However, back then I said, and I still stand by this today, that learners and academics need to understand the potential of technology and digital to better understand the affordances and opportunities it can provide for learning. You don’t need to be able to do the technology, but you do need to know what it can do.

I also brought in scepticism about technological advances, something I would draw upon in future talks and presentations.

Nokia N95

Video (and film) had been used for learning for years, but people were sceptical and convinced that video (i.e. lecture capture) would stop traditional learning activities. However, we know that television didn’t destroy radio, radio didn’t kill newspapers, and books didn’t replace folk stories. When we have a new technological development, the result is sometimes a negative impact on existing technologies, but often the new technology brings affordances of its own, enabling access that otherwise wouldn’t be possible.

I also talked about the potential of video on mobile devices. Video cameras were getting smaller and cheaper, and the quality was getting better as well. You could buy video cameras which could record HD video, even if it was a challenge to capture and edit it on standard computers of the time. This was before the concept of streaming became mainstream. I showed a Sanyo Xacti camera which was waterproof and dropped it in a jug of water. These cameras could be used in dirty and dusty environments and then washed under the tap!

James Clay presenting at FOTE09

Mobile phone video has become so much better now. I am still impressed that my iPhone can record 4K video… If only we could get people to record video in landscape!

GPS was usually an optional extra on devices back then; today it is standard in the devices we buy. I saw this as an opportunity: the concept of geo-location based learning felt quite magical at the time. Your device knows where you are, so it personalises the learning based on your location. What I missed was how location tracking would become a very big issue for people.

There was a bit of a backlash against e-books back in 2009, as people felt that they weren’t as good as “real” books. For me they weren’t a replacement for books; they enabled different ways of reading. For many, e-books and e-book readers enabled a new way to access books and content that they otherwise wouldn’t have had. I presented on the future of reading at #FOTE10 the following year and became a bit of an expert on e-books as a result. I presented on e-books at many different events and conferences, as well as writing a chapter in a book, and finally a book on Preparing for Effective Adoption and Use of Ebooks in Education in 2012.

Today e-books are part and parcel of education, with easier access to books by students from academic libraries. As I did predict, we didn’t see the end of physical books; we still have bookstores and people still buy physical books.

reading a Kindle
Image by Pexels from Pixabay

Back then in 2009 connectivity was either slightly haphazard, or expensive, or both. We had 3G, but it wasn’t widespread, and it would be another three years before we saw 4G.

WiFi was there, but it didn’t always work, and network congestion would often cause the WiFi to fail. This happened regularly at events and conferences I attended back then, as delegates killed the WiFi with too many connections.

In the future I felt connectivity wouldn’t just be important, it would be critical for the future of learning.

Today we have really good (and cheap) mobile data: 4G is more widely available and 5G is starting to appear. Ubiquitous WiFi is certainly there compared to ten years ago; Eduroam has made it easier for people in education to connect when travelling, and WiFi is easily found in most places. This has allowed users to do so much more when travelling and moving about, or just when drinking coffee. I certainly notice how many people are streaming video, having video chats, and doing so much more, because they have the connection and the bandwidth to do so.

Mobile often means battery power, and access to charging. Everyone remembers how their Nokia phone would last days on a single charge; today, most people seem to complain that their smartphone battery doesn’t last the day. Batteries may not seem to have got better, but they have; we have seen significant improvements in battery technology alongside a huge increase in our demand for power on our devices. Streaming video requires more power than reading an e-mail. One thing that has taken time to filter through is the importance of being able to charge devices. Since 2009 we have seen trains and buses adding power sockets and USB ports for charging, hotels have added similar sockets, and some lecture theatres now have plug sockets as well.

In my 2009 presentation I talked about the technological penknife.

Image by Karolina Grabowska from Pixabay

This is one thing I got very wrong. I thought that a device that did everything would do everything badly. A penknife has multiple tools, but most of them aren’t very good at doing the jobs they are designed to do. People, I thought, would prefer specialist devices for specific activities. Why would you have rubbish video from a phone when you could have a decent HD video camera? Why would you use the rubbish microphone on a device when a specialist recording device would do it so much better? Well, that didn’t happen. In reality devices have become so much better that we don’t need multiple devices. We have the penknife, but it’s a really good penknife, really good at everything.

I then went on to talk about change and the importance of managing change. I talked about how change can be a series of small steps, but noted the importance of missing steps, endless steps and steps that trip you up.

These slides were really where I started to understand strategy and writing strategies much more. This certainly helped me in future roles and heavily influenced the design of certain aspects of the Jisc Digital Leaders Programme, for which I was part of the research and development team led by Lawrie Phipps.

I talked about activity: technology should never be about the technology; it needs to be about how it can enhance or improve activities, or where the affordances create new opportunities for different activities. We still have a perception that we shouldn’t talk about technology first, though sometimes I think we should.

Technology allows for flexibility: a flexible curriculum, flexible approaches to delivery, flexible learning. I think we have made a little progress here, but so much more is possible these days. The technology enables flexibility, but that doesn’t mean it will just happen; there is so much more that needs to happen to enable flexibility.

Back then I felt sharing was important, not just sharing content (as in open) but also sharing ideas, concepts and approaches. Not that this didn’t happen, but it was difficult to do. Today it is much easier to share than it was back then; so much so that I think we have forgotten about the time when it wasn’t.

I talked about the importance of working collaboratively. Since the talk online tools have made it so much easier to collaborate. Collaboration across institutions (and countries) is so much easier these days. Tools such as Slack enable groups to talk and work together.

I talked about innovation, celebrating ideas. Innovation doesn’t always mean better, it means different or new. Following on from that I talked about experimentation and encouraging it within our institutions.

If you want innovation, then it needs to be embedded into the strategy, rewarded and not penalised when things go wrong. It needs to be done in collaboration with learners not done to them. I think we are seeing much more innovation and collaboration these days, and the student voice is helping to inform developments and ideas.

I said we need to re-think assessment, as technology was going to have an impact. I think it has, but not in the way we thought it would. We try to use technology to “fix” assessment today, rather than re-imagine how we assess.

I talked about culture and how culture can enable change, but also frustrate it. Culture is about what and who we are, it’s the sum of the people within an organisation. This was something we covered years later in the Jisc Digital Leaders Programme.

I have written about the importance of culture and strategy in this blog post on writing strategies.

I have always seen technology as a solution to a problem. Technology in itself is not the problem needing to be solved. This was something that I wrote about in 2018.

I finished the presentation by talking about the future and how the future was about the learner, the student. It was about how they wanted to learn, where they wanted to learn, what they wanted to learn and with whom they wanted to learn. Why did we need to think about the future? Because we needed to think about the learners: then, now and in the future.

So did I predict the future?

No.

It did, though, have a huge impact on my future, some of which I have outlined above. As a result of this talk I was invited to speak at a range of events and conferences on the future of learning, and at a range of mobile learning events. I spoke the following year at FOTE 10 about the future of reading, which resulted in me doing much more in the e-book space.

So there is also a video of me (looking much younger) presenting, if you want to watch what happened…

Which video and audio standard should I use?

People like standardisation: it makes life easier, makes it simpler to provide advice and guidance, and removes barriers. The implication is that if everyone used the same device, the same formats and the same delivery mechanisms, then it would be easier to deliver video and audio to learners and for those learners to create audio and video themselves.

Audio is a little easier to standardise; most people have heard of mp3 and most devices can play mp3 files. That should make it much easier to roll out. However, because mp3 is covered by patents and licensing fees need to be paid, some devices that can play mp3s cannot record directly to mp3. These devices will use other audio codecs that play fine on the device in question, but not necessarily on other devices or through a browser. The solution at Gloucestershire College was to standardise on the Edirol R09H, which records natively to mp3 onto an SD memory card. Yes, it is expensive, but it does record to mp3 and the quality of the recordings is excellent. Use of SD cards also meant that it was very easy to transfer recordings to a laptop or computer and then share them on the VLE.

Of course, though the mp3 standard is ubiquitous, recording to mp3 is not necessarily the best format to use, especially if you are going to do any kind of editing on the recording. If, for example, you are going to be using GarageBand to edit the recording and create a podcast, then you are not going to want to use the mp3 format. The reason is that an mp3 file is compressed using what is called lossy compression; in other words, information is lost when the file is compressed. If you edit and then compress again, more information is lost: you are compressing an already compressed format and you will lose even more quality. This means you can end up with a less than satisfactory recording. For those who prefer higher quality, the Edirol R09H can also record direct to uncompressed WAV. This is CD quality and doesn’t use any lossy compression, so there is no loss of quality when editing the file. Of course the problem with WAV is that the file sizes are large, so distribution is a problem; the final edited audio file can then be compressed to mp3.
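
As a rough illustration of that “edit uncompressed, compress once at the end” workflow, here is a minimal sketch using the pydub library (which relies on ffmpeg being installed). The file names and trim points are placeholders, not part of any particular recording setup.

```python
# Sketch of the "edit uncompressed, compress once at the end" workflow using
# pydub (pip install pydub; requires ffmpeg on the system). File names and
# trim points are placeholders for illustration.
from pydub import AudioSegment

# Load the uncompressed WAV recording from the recorder's SD card.
recording = AudioSegment.from_wav("lecture_raw.wav")

# Simple edit: trim the first 30 seconds and the last 10 seconds
# (pydub slices are in milliseconds).
edited = recording[30_000:-10_000]

# Compress to mp3 only once, at the very end, to avoid stacking lossy passes.
edited.export("lecture_final.mp3", format="mp3", bitrate="192k")
```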

It’s one thing to record audio; delivery is something else. Placing the audio file on the VLE makes it very easy for learners to access and download the recording in their browser. However, for regular recordings it makes more sense to publish them as an RSS feed and podcast them. This will allow learners to subscribe to the series of recordings through software such as iTunes and then transfer the recordings to a portable device such as an iPhone or iPod. The challenge here is not just technical, but also getting practitioners to recognise the importance of a regular series of recordings or podcasts.
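
Getting a feed in place need not be complicated. Below is a minimal, illustrative sketch of a podcast-style RSS feed built with Python’s standard library; the URLs, titles and file sizes are made up for the example, and a real feed would add dates, descriptions and durations.

```python
# Minimal sketch of a podcast-style RSS feed for a series of recordings,
# using only the standard library. URLs, titles and sizes are illustrative.
import xml.etree.ElementTree as ET

episodes = [
    {"title": "Week 1 lecture", "url": "https://vle.example.ac.uk/media/week1.mp3", "size": "10500000"},
    {"title": "Week 2 lecture", "url": "https://vle.example.ac.uk/media/week2.mp3", "size": "11200000"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Module recordings"
ET.SubElement(channel, "link").text = "https://vle.example.ac.uk/course/module"
ET.SubElement(channel, "description").text = "Weekly audio recordings for the module"

for ep in episodes:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = ep["title"]
    # The enclosure element is what podcast clients use to find the audio file.
    ET.SubElement(item, "enclosure", {"url": ep["url"], "length": ep["size"], "type": "audio/mpeg"})

ET.ElementTree(rss).write("podcast.xml", encoding="utf-8", xml_declaration=True)
```

A learner could then subscribe to the resulting podcast.xml in iTunes or any other podcast client.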

If audio is difficult, video is much more challenging. Different cameras and devices record video using different formats, and even when they use the same format they may use a different codec. Modern operating systems generally have few problems with MP4 video files; both Windows 7 Media Player and OS X QuickTime X can play modern MP4 files, and many mobile devices and smartphones can play them too. However, one of the challenges facing FE colleges is that many are still using Windows XP, which doesn’t natively support MP4, and adding that functionality across a managed network isn’t easy.
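
One practical approach, sketched below under the assumption that ffmpeg is available on the machine doing the conversion, is to normalise whatever a camera produces into an MP4 file with H.264 video and AAC audio, which most modern players and devices can handle. The file names are placeholders.

```python
# Rough sketch: normalising whatever a camera produces into a widely playable
# MP4 (H.264 video, AAC audio) by calling ffmpeg. Assumes ffmpeg is installed;
# file names are placeholders.
import subprocess


def transcode_to_mp4(source: str, destination: str) -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", source,               # input file in whatever format the camera used
            "-c:v", "libx264",          # re-encode video as H.264
            "-c:a", "aac",              # re-encode audio as AAC
            "-movflags", "+faststart",  # move metadata to the front for web playback
            destination,
        ],
        check=True,
    )


transcode_to_mp4("fieldtrip.avi", "fieldtrip.mp4")
```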

One solution is to upload the video to sites such as Vimeo or YouTube and use Flash. Flash is often the answer to delivering video on networks where native MP4 playback isn’t possible. The issue with Flash video is how you deliver it on mobile devices, many of which don’t support Flash. There is also the question of how you deliver the video to mobile devices that are not network connected. Another issue is privacy: the problem with using sites such as YouTube is that they are public, and it may not be possible or desirable for the videos to be public.

A key technological challenge for institutions is to answer all those problems without it becoming a barrier to learners and practitioners.

100 ways to use a VLE – #75 Streaming live audio

In the past if you wanted to “broadcast” live audio to learners you basically had to be the BBC or use a CB radio. The challenge was that the learner who was listening needed a receiver at their end.

The internet now makes it much easier to broadcast live audio using tools such as Nicecast.

Nicecast is the easiest way to broadcast music from your Mac. Broadcast to listeners around the world.

The thing about Nicecast is that the stream is a simple URL. For learners this is not always that simple; for example, they may lose, delete or be unable to find the e-mail that contains the URL. If printed on a handout, the URL may contain an error, or the learner may type it into their browser incorrectly.

Putting the URL, or even better embedding the live audio feed, into the VLE will ensure that learners can not only find the feed, but also listen to it whenever the practitioner broadcasts. To be honest, you probably want to e-mail the URL anyhow and use any other communication channels that the learners use, Twitter for example.

The VLE is only really one medium and reliance on any one medium is not good practice. However, if the VLE is used regularly for posting these kinds of feeds and links, then the learners will become familiar with going to the VLE to access them.

Using Audio more

Teaching has always had an oral tradition, teachers have spoken, learners have listened, conversations take place and learning happens.

I suspect in most classrooms, lecture theatres and seminar rooms across the UK (and the world) there are people speaking and people listening.

Though I don’t want to get into a huge debate about the merits of lectures at this point, that is something for a different time. The ability for learners to access a recording of a lecture and listen to it again is something that can support learning.

Students generally used podcasts to review lecture content, especially when they had difficulty with understanding lectures or new terminology.

Scutter et al (2010)

Yes, there is another school of thought about lecture capture, as Mark Smithers notes:

..the large scale implementation of lecture capture is probably one of the costliest and strategically misguided educational technologies that an institution can adopt.

Smithers (2011)

One of the issues that I have with lecture capture, though, is that in isolation an audio recording of a lecture is insufficient to meet the needs of the learners, as Scutter et al (2010) found.

Students generally listened to the recordings whilst viewing the lecture PowerPoint presentations on a home computer.

And as Smithers (2011) recognises

The technology does nothing to engage the student who instead of sitting passively in a lecture theatre checking their text messages will now sit passively in front of a screen at home checking their text messages.

Students will be “forced” to listen to the lecture recordings through a computer, as the recording alone would not be of benefit without access to other resources such as PowerPoint presentations or handouts. Though is making an audio recording of a lecture available better than not providing one at all? There was an unconvincing study that said podcast lectures were “better” than real lectures. So though there are issues with lecture recordings, we shouldn’t as a result dismiss the usefulness of audio for enhancing teaching and learning.

Podcasts needn’t just be a recording of a lecture; they can take a variety of formats. Podcasts are an effective mechanism to support learning. Because of the audio format, learners can download and listen to them on an mp3 player (such as the iPod) as they walk to college or catch the bus. Likewise, mp3 files can be burnt to CD and a learner could listen to them in their car, or on their home music system. Though portable, podcasts can also be accessed directly through a computer and listened to via the computer speakers (or headphones).

Podcasts can be listened to more than once, enabling a learning objective to be reinforced or used for revision purposes. Podcasts do not need to be lengthy affairs; ten to fifteen minutes is more than ample for more educationally orientated podcasts.

One way of creating podcasts easily is using a tool such as Audioboo, which, to put it simply, is an app on your iPhone that allows you to make an audio recording, add your location, a picture and tags, and upload the lot to a website.

Of course a podcast implies a regular series of broadcasts, but you don’t have to do this to use audio for learning. One-off audio recordings are also a good idea and can be used for a range of topics or activities.

In terms of format, the simple monologue is quite difficult to carry off well; where it can work is in describing what is required for an assignment brief. The learner would listen to the recording as they go through a web-based or paper-based assessment, for example. I know one lecturer who uses audio recordings of himself asking questions, which he plays back both via the VLE and in the classroom.

Conversations, either one-on-one or in small groups, are one way of making audio recordings more interesting. Panel discussions on topics by staff (or by learners) can be useful for learners and more engaging than a monologue on the subject. Discussions can be formal, informal, light-hearted or confrontational.

Radio-style shows with a combination of different formats will certainly work with learners, but they require a lot more planning, editing and production. These could include external interviews, outside broadcasts or field recordings.

“Professional” or external audio recordings can be more complex due to rights issues and copyright. However, with the right licences (ERA for BBC radio broadcasts) or using statutory rights, it is possible to play back recorded radio programmes and other commercial recordings in the classroom or lecture theatre for the purpose of instruction.

Audio also has possibilities for assessment in terms of giving feedback to learners; the JISC Sounds Good project found that:

Using audio for assessment feedback has proved popular with students and staff. It is probably fair to say most UK higher education lecturers would find it worth giving audio feedback an extended trial with at least some of their assessment work.

Most of the focus in this blog post has been on practitioners using audio; of course, learners using audio is another way to enhance learning.

Learners could use tools such as Audioboo to create a reflective audio learning journal, or create a podcast for their course that demonstrates their understanding of the subject for their peers.

Audio assessment is often used in languages, but why has the tradition died out in other subjects? Yes, there are issues with assessing audio recordings, but if we are really interested in personalisation and inclusion, audio assessment can be one way of ticking that box. Technology can also be used to speed the process up: it is now possible to digitally speed up recordings in a way that doesn’t distort voices so that they sound like chipmunks! This can speed up the marking process.
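
As an illustrative sketch (not the tool any particular institution uses), a phase-vocoder time stretch such as the one in the librosa library changes playback speed without shifting pitch; the file names and speed factor below are placeholders.

```python
# Hedged sketch: speeding up an assessment recording without the "chipmunk"
# pitch shift, using librosa's phase-vocoder time stretch.
# (pip install librosa soundfile); file names and the rate are placeholders.
import librosa
import soundfile as sf

# Load the recording at its native sample rate.
audio, sample_rate = librosa.load("student_response.wav", sr=None)

# Play back 25% faster while keeping the pitch unchanged.
faster = librosa.effects.time_stretch(audio, rate=1.25)

sf.write("student_response_fast.wav", faster, sample_rate)
```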

Technology allows us to change where and when we listen. Technology allows us to listen again. Technology allows us to be heard too. There are many ways that audio technology and recordings can be used to enhance and enrich teaching and learning.

References

Rotheram, B. (2009). Sounds Good: Quicker, better assessment using audio feedback. Available: http://www.jisc.ac.uk/whatwedo/programmes/usersandinnovation/soundsgood.aspx. Last accessed 16th March 2011.

Scutter, S., Stupans, I., Sawyer, T. & King, S. (2010) How do students use podcasts to support learning? Australasian Journal of Educational Technology, 26(2), 180-191. http://www.ascilite.org.au/ajet/ajet26/scutter.pdf

Smithers, M. (2011). Is lecture capture the worst educational technology?. Available: http://www.masmithers.com/2011/03/11/is-lecture-capture-the-worst-educational-technology/. Last accessed 16th March 2011.

100 ways to use a VLE – #73 Hosting audio

Though there are audio sites out there on the web that will host audio, sometimes you may not want the recording to be public. In that case, hosting the audio on the VLE may be an ideal solution.

If the audio is of a lecture on a tricky subject, or contains licensed content that you can place on the VLE but are not allowed to freely distribute, then placing the audio on the VLE may be a better option than uploading it to a public site such as Audioboo. Sensitive recordings of research subjects, or patients, are further reasons why you may host the audio on the VLE.

The final key reason you may want to place the audio on the VLE is that the recordings are of or by learners and they may not wish to have their work or their views available publicly. Of course there are sites out there that can be password protected, but at least the VLE does most of that hard work for you.

Audio can be useful to enhance and enrich learning, one lecturer I know records short summaries of the lessons and the assignments. Some learners find this more engaging than reading them on paper or on the screen.

Recording debates and discussions allows learners to reflect on and review them at a time and place to suit the learner, rather than just relying on notes and memory.

By placing the audio on the VLE, you can place it in the context of learning, enabling learners to see how the recording relates to the rest of the course or topic.

For ease of access, by placing the audio on the VLE, learners will be able to click and download the audio file to put it on, say, a mobile device.

Though the server may not be configured to deliver or stream video, it is often much easier (and places fewer technical demands on the server) to stream audio. From a learner perspective, audio requires less bandwidth than video, so it can often be accessed easily on poor internet connections such as mobile internet.
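
A rough back-of-envelope comparison makes the point; the bitrates below are typical illustrative figures rather than measurements of any particular VLE or recording.

```python
# Back-of-envelope comparison of the bandwidth needed for audio versus video;
# the bitrates are typical illustrative figures, not measurements.
audio_kbps = 128    # a typical mp3 stream
video_kbps = 2_500  # a typical standard-to-high definition video stream
minutes = 50        # length of a recorded lecture

audio_mb = audio_kbps * 60 * minutes / 8 / 1000
video_mb = video_kbps * 60 * minutes / 8 / 1000

print(f"Audio: ~{audio_mb:.0f} MB, video: ~{video_mb:.0f} MB for a {minutes} minute recording")
# Audio: ~48 MB, video: ~938 MB for a 50 minute recording
```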

Teaching has had an oral tradition that goes back for thousands of years. With a virtual learning environment, using audio recordings can allow learners a choice on how they access content, instructions and advice.

Edirol R-09HR 24bit 96kHz Wave/MP3 Recorder

We have been using the Edirol R-09HR 24bit 96kHz Wave/MP3 Recorder for some time now and I use it personally too for making audio recordings.

The reasons are:

Records natively to mp3, so there is no need to convert any recordings.

Can record to WAV, useful for those times you want an uncompressed recording for editing and later compression.

It’s robust.

The microphones have been calibrated and are therefore excellent.

You can use external microphones with them.

They record to SD cards. This means that once a student (or member of staff) has made their recording, they can remove their SD card and pass the Edirol on to the next user, so you don’t need as many recorders for field recordings. It also means you don’t need a computer handy to “take” the recordings off – having said that, it does have a standard mini-USB connection if needed.

They run on AA batteries for what seems like forever.

They will record one on one interviews and whole room conversations.

Yes, they do cost more than your average audio recorder, but they are in my opinion real value for money.


100 ways to use a VLE – #74 Embedding audio

Most people I know are aware that you can embed video into webpages from sites such as MoLeTV or YouTube.

It should be noted that many sites that host audio files also allow these to be embedded into webpages. Of course that means you can usually embed them into the VLE.

A service such as Audioboo allows you to embed audio into a web page, so it will work on the VLE.

Having recorded your “boo” you can copy the embed code (or copy the embed code of another “boo” that you have found).

This can then be “pasted” into the VLE into a discussion forum, into a webpage or as a label into a topic.

Listen!

The advantage to the learner is that they can just click play, without having to worry about clicking a link, opening a new window, and clicking another link to play the audio.

So you’ve embedded the audio, what next?

It’s not just about the audio; you can’t just place the audio on the VLE and expect it to do everything. As with using audio in the classroom, you need to consider the audio in the context of the learning activity.

You may, for example, ask your learners to listen to the audio and comment on the recording in a VLE discussion forum. Another example would be to use an audio recording to reinforce a resource on the VLE.