Assistive Technology

March 20, 2022 | By JR

Assistive technology devices: using the web

Do you have any experience with assistive technologies?

A few of the alternative input devices covered in this week’s reading materials include:

  • head pointers
  • single switch devices
  • foot switches
  • sip and puff switches, and
  • eye tracking software

Of these I have seen head pointers and sip and puff switches, although previously I had no idea how the sip and puff system worked. The coordination between the software and the sip and puff system makes so much sense once it's explained, and I can see how ensuring the DOM is set up correctly would have a huge impact on usability for this tool.
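To make that DOM point concrete, here is a minimal sketch under my own assumptions rather than anything from the reading: single-switch and scanning tools generally step through a page's focusable elements in source order, so using native controls and keeping DOM order logical is most of the battle. The toolbar id below is hypothetical.

```ts
// Switch scanning steps through focusable elements in DOM order, so a
// native <button> (focusable and activatable by default) beats a styled,
// click-only <div> that scanning software never lands on.
const save = document.createElement("button");
save.textContent = "Save";
save.addEventListener("click", () => {
  console.log("saved");
});

// Append controls in the order a user should encounter them; the scanning
// sequence then matches the reading order. "#toolbar" is a hypothetical id.
document.querySelector("#toolbar")?.append(save);
```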

As for eye tracking software, I've seen it used in usability testing, but I also recall a personal experience trying eye tracking as a UI in a museum in either Finland or Estonia over ten years ago. As I stood in front of the screen, I would gaze at the menu item I wanted to click and a little progress circle indicator would pop up. If I maintained my gaze long enough for the circle to fill, the system would perform the click action.
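Out of curiosity, here is roughly how that museum interaction could be reconstructed. This is only a sketch of the dwell-to-click pattern, assuming a hypothetical tracker that moves the pointer with the user's gaze (many trackers do emulate the mouse); the timing value is a guess.

```ts
// A rough sketch of dwell-to-click: gazing at an element starts a timer,
// and looking away before the "progress circle" fills cancels the click.
const DWELL_MS = 1200; // assumed time for the progress circle to fill

function makeDwellClickable(el: HTMLElement): void {
  let timer: number | undefined;

  const onGazeEnter = () => {
    // In a real system, a progress indicator would animate here.
    timer = window.setTimeout(() => el.click(), DWELL_MS);
  };
  const onGazeLeave = () => {
    // Breaking gaze before the circle fills cancels the pending click.
    if (timer !== undefined) window.clearTimeout(timer);
  };

  // Many trackers emulate the pointer, so mouse events work as a stand-in.
  el.addEventListener("mouseenter", onGazeEnter);
  el.addEventListener("mouseleave", onGazeLeave);
}
```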

Augmentative and Alternative Communication (AAC) tools discussed in the reading included:

  • Dynavox AAC
  • Braille display and notetaker
  • dictation software (e.g. Dragon), which is not just for dictating text but also acts as a voice user interface
  • electronic magnifiers
  • OCR
  • Screen magnification software (e.g. SuperNova)
  • Screen reader software (e.g. JAWS, VoiceOver, NVDA)
  • Text to speech software

Out of this group of tools I have experience with a few. I have used OCR, ensuring that scans of PDFs and book chapters are high quality and that text is understood by the reader as text. For screen reader software, I used ChromeVox, a Chrome extension, to do accessibility testing on a recent elearning project. I also use and demonstrate MS Immersive Reader, which has Canvas integration and, now that I have a Windows machine again, I see is included in MS Edge. Finally, I have used speech-to-text software (I forget the name of the plug-in) in Google Docs to write learning modules for a teaching and learning online course when I worked at uAlberta. I found it simpler to speak what I wanted to write and then clean up the text afterwards. This also helps because if I type too much my tendonitis flares up.

Were you surprised or intrigued by any assistive technology as described?

I had always been curious about the sip and puff system, as I had seen it before but never understood how it worked. Seeing the number of switches and controls required, and the level of detailed activity it allows the user to control, was really interesting. The Braille display and notetaker keyboard also stood out: its compact, adaptable design and its ability to work with different devices struck me as innovative.

Do you believe any assistive technologies may be adopted by able bodied individuals?

I think this is where we start to see some applications under the umbrella of universal design: a design created to increase accessibility for one group of users that ends up benefiting all users. Let's take screen readers and read-aloud tools (e.g. Immersive Reader) for example. The typical use case that comes to mind is users with total blindness, but these tools also help low-vision users, users who have trouble tracking lines of text with their eyes, users with dyslexia or other cognitive exceptionalities, and users who do not appear to have disabilities (such as those who could read long lines of text on a screen but opt to listen instead).

Another example is closed captions. They are used not only by users who cannot hear, but also by additional language learners and by people in environments where the sound is not clear. On a course I worked on a long time ago, we received feedback from a mother taking the course with a newborn at home. She watched the lecture videos on mute with the captions on while her baby slept.

I already gave an example above of using voice-to-text software to draft text-based content as well.

Tools and Techniques

Next up, we explored the Web Accessibility Initiative in a bit more detail. W3C provides some examples of tools and techniques in the following areas:

  • Tools and preferences
  • Perception – hearing, feeling, and seeing
  • Presentation – distinguishing and understanding
  • Input – typing, writing, and clicking
  • Interaction – navigating and finding

Do you have any adaptive strategies?

Using zoom and captions is becoming more and more common in my day-to-day life. When I got my first Windows machine in over 15 years, I noticed everything on the screen looked huge and assumed it was just a lower-resolution monitor than I was used to. But then I noticed a lot of things looked blurry, even the text in prompts from Windows itself. I discovered that the default (and recommended) display scaling on Windows 11 was 200%! I set it back to 100%, expecting to really love having so much more space on my smallish laptop screen, but found I could not read any of the text. I ended up settling on 175%.

Without this reading I would not have considered pop-up and animation blockers as adaptive strategies. Normally I use these blockers because ads and trackers are annoying more than anything, but I can see how pop-ups and animation would be even worse when using a screen reader.
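On that note, a page can honor the same preference itself instead of waiting for a blocker. The prefers-reduced-motion media query is a real, widely supported browser feature; the reduced-motion class it toggles here is a hypothetical hook for a stylesheet to disable animations and auto-playing carousels.

```ts
// Honor the user's OS-level "reduce motion" preference on the page itself.
const reduceMotion = window.matchMedia("(prefers-reduced-motion: reduce)");

function applyMotionPreference(): void {
  if (reduceMotion.matches) {
    // Hypothetical class our stylesheet would use to turn off animations.
    document.documentElement.classList.add("reduced-motion");
  } else {
    document.documentElement.classList.remove("reduced-motion");
  }
}

applyMotionPreference();
// Re-apply if the user changes the setting while the page is open.
reduceMotion.addEventListener("change", applyMotionPreference);
```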

Again, I would not have considered spelling and grammar tools adaptive strategies until prompted by the reading. I use Grammarly, though not super consistently, as well as plain old operating system spell checkers. I've also started making some use of predictive text when composing.

Finally, keyword search is a huge adaptive strategy for me. If I cannot use cmd/ctrl+F to search for text on a page or in a document, I pretty much want to throw out the work in its entirety. I also get frustrated by systems that don't allow searching across the pages or documents in a site (I'm looking at you, Canvas).

Can you describe any frustrating digital experiences?

Here I mostly think about my own mobile browsing. A few things that really frustrate the mobile experience: popups or sliders that cover a significant portion of the page, and pages that load in a way that makes simple scrolling impossible until all the junk comes through (I'm looking at you, recipe sites).

Another frustrating part of the mobile experience is the touch target size of buttons and links. Sometimes just getting an action to happen on tap is a chore, and if buttons are too close together, triggering the wrong action is highly likely.
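As a rough illustration (my own sketch, not from the reading), a few lines in the browser console can flag suspiciously small targets. WCAG 2.2 sets a 24×24 CSS pixel minimum for target size, while platform guidelines often suggest something closer to 44px; the threshold below is my own pick.

```ts
// Quick-and-dirty audit: warn about links and buttons whose rendered size
// falls below an assumed minimum touch target of 44 CSS pixels.
const MIN_TARGET_PX = 44;

document
  .querySelectorAll<HTMLElement>("a, button, [role='button']")
  .forEach((el) => {
    const { width, height } = el.getBoundingClientRect();
    if (width > 0 && height > 0 && (width < MIN_TARGET_PX || height < MIN_TARGET_PX)) {
      console.warn(`Small touch target (${Math.round(width)}x${Math.round(height)}):`, el);
    }
  });
```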

Screen Reader Basics: VoiceOver – A11ycasts #07

Screen Reader Basics: NVDA – A11ycasts #09

Do you have any prior experience with screen readers?

My most recent experience with a screen reader is using ChromeVox by Google. I was working on an elearning project, making interactive slide decks in H5P that needed to meet AODA requirements. To test and edit the slide decks, I used ChromeVox to explore how a screen reader would interact with the interactives.

Did you try a screen reader after watching this video?

Not yet. When I previously settled on ChromeVox, it was because I didn't have a quick and simple way to get my hands on JAWS and didn't know how to use VoiceOver or NVDA for the project. They are definitely the next ones I will try, as ChromeVox left a lot to be desired.

How important is it for designers to consider voice?

Voice is important for a number of reasons, beginning with providing access to content for users with varying visual needs. From an instructional design perspective it is also very important: in the research on multimedia design for learning, voice is all over the place, both as a replacement for text (an option for learners) and as a supplement. One complaint from many learners about older readers, though, is the very robotic voice. There was one textbook I wanted to read that provided auto-generated audio recordings. I made it a whole five minutes into that book before metaphorically throwing the tape out of my car window. More realistic voices and the right cadence, pitch, and rhythm will be important for clear and engaging communication.
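The browser's own speech synthesis exposes exactly those knobs. Here is a minimal sketch with the Web Speech API; voice availability varies by browser and OS, and getVoices() can be empty until voices finish loading.

```ts
// Speak a line of text with tuned cadence and pitch via the Web Speech API.
const utterance = new SpeechSynthesisUtterance(
  "Chapter one: an introduction to assistive technology."
);
utterance.rate = 0.95; // slightly slower than the default cadence
utterance.pitch = 1.0; // default pitch

// Prefer an English voice if the platform offers one; otherwise fall back
// to the browser default. Voice quality differs widely across platforms.
const voices = speechSynthesis.getVoices();
utterance.voice = voices.find((v) => v.lang.startsWith("en")) ?? null;

speechSynthesis.speak(utterance);
```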

Another short touch-point is voice-activated interfaces and voice assistants. We now have the computing power to use voice not only for conveying and distributing information, but for inputting it as well.
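For the input side, browsers expose a (still uneven) speech recognition API. A hedged sketch follows, with a feature check since the constructor is prefixed in Chromium-based browsers and missing elsewhere; the #notes field is hypothetical.

```ts
// Capture one spoken phrase and drop the transcript into a text area.
// SpeechRecognition is prefixed as webkitSpeechRecognition in Chromium.
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (SpeechRecognitionCtor) {
  const recognition = new SpeechRecognitionCtor();
  recognition.lang = "en-US";
  recognition.interimResults = false; // only deliver the final transcript

  recognition.onresult = (event: any) => {
    const text: string = event.results[0][0].transcript;
    // "#notes" is a hypothetical element for this sketch.
    const notes = document.querySelector<HTMLTextAreaElement>("#notes");
    if (notes) notes.value = text;
  };

  recognition.start();
}
```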