Evaluating a Digital Experience

March 26, 2022, by JR

How to Manually Check Your Website for Accessibility

Why is manual accessibility testing important?

The article states that automated accessibility checkers can only catch around 30% of accessibility issues. That number surprised me, but thinking back on my recent experiences it actually makes a lot of sense. For example, the accessibility checker included in Canvas seems to cover only the basics. I regularly notice courses where the colour contrast is insufficient or images are missing alt text, and I don't tend to do manual testing in those cases. However, on a recent project that had to meet AODA requirements, I checked my work periodically with a few automated tools and found myself manually testing the interactive slide decks as well. I ended up following a process very similar to the one described in the article: load the page, navigate using only the keyboard, listen with a screen reader, and so on. As a result, it made me choose different types of interactions in a few places.
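To make that keyboard-only pass repeatable, something like the following sketch could work. It is a minimal illustration using Playwright (my own choice of tool, not one named in the article); the URL and the number of Tab presses are placeholders.

```typescript
// Minimal sketch of a keyboard-only navigation check with Playwright.
// Tab through the page and log what holds focus, so unreachable or
// illogically ordered interactive elements stand out.
import { chromium } from 'playwright';

async function tabThroughPage(url: string, maxTabs = 20): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  for (let i = 0; i < maxTabs; i++) {
    await page.keyboard.press('Tab');
    const focused = await page.evaluate(() => {
      const el = document.activeElement;
      return el
        ? `${el.tagName.toLowerCase()} "${el.textContent?.trim().slice(0, 40) ?? ''}"`
        : 'none';
    });
    console.log(`Tab ${i + 1}: ${focused}`);
  }
  await browser.close();
}

tabThroughPage('https://example.com'); // placeholder URL
```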

What is the difference between manual testing and user testing?

The article describes user testing as more specific, goal- or task-oriented testing (i.e. can the user complete this task, in how long, in how many clicks, with how many errors) and manual accessibility testing as broader. I would lump manual accessibility testing in with other forms of QA testing, some of which Connie Malamed describes well over at the eLearning Coach. With manual testing you're trying every possible avenue, as much as you can, to see where things might become unusable or unclear. With the more streamlined pathways of user testing, it can be easy to miss problems that lie off the prescribed route.

Whose job is it to run manual tests (designers, developers, others)?

This, I would say, depends on the organization you're working in. I have been in places where I was the only instructional designer and the job entailed everything from analysis, through design and development, all the way to evaluation. In that case, unless I hired a third party, it was my job to make sure the quality of the products being shipped was up to a certain standard. In other shops I've been in, I was not responsible for the development portion, and QA testing was left to the folks who built the classes in the LMS. In hindsight, as the project manager I probably should have been ensuring courses were built to some kind of accessibility standard. This is something I can consider and take forward in my current role.

Web Accessibility Evaluation Tools List

A few weeks ago, I think, David Cormier from the University of Windsor was asking which accessibility checkers people in educational technology were using. WAVE came up a few times, and I contributed Google Lighthouse (one I had been using for a recent project, and which is built right into Chrome's developer tools). Well, the Web Accessibility Evaluation Tools List includes over 160 tools. That is a little overwhelming, but it feels like a treasure trove.
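For anyone curious, Lighthouse can also be run outside of DevTools. Here is a hedged sketch of a programmatic, accessibility-only audit using the lighthouse and chrome-launcher npm packages; the URL is a placeholder.

```typescript
// Sketch: run only Lighthouse's accessibility category from Node.
import * as chromeLauncher from 'chrome-launcher';
import lighthouse from 'lighthouse';

async function auditAccessibility(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['accessibility'], // skip performance, SEO, etc.
    output: 'json',
  });
  // The score is 0..1; the DevTools UI shows it as a percentage.
  console.log('Accessibility score:', result?.lhr.categories.accessibility.score);
  await chrome.kill();
}

auditAccessibility('https://example.com'); // placeholder URL
```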

How many tools did you try (without being prompted)?

Three that I would like to try, but don't have an EPUB handy for, are Accessibility Score, Ace, and Make-Sense. I do work with a platform that publishes webbooks and can export EPUBs, so it will be really interesting to compare the web accessibility of a book against its EPUB version.

I've been using a Mac for the last 15 years or so. This year I am trying out Windows again, so I decided to try Accessibility Insights for Web, by Microsoft. It is a set of browser extensions for testing sites. Overall, I think bookmarklets and browser extensions would be the most useful for my work, since much of it happens in the Canvas LMS or WordPress. Looking down the list, though, I am also going to have to get the axe Accessibility Linter for Visual Studio Code, by Deque Systems, Inc., to use with VS Code.

Did you find any tools particularly useful?

The Accessibility Insights for Web extension seems like a really useful tool, and the five-minute check-up is much more powerful than I expected. The visual helper is simple (red outlined boxes) and the breakdown it provides is easy to follow:

  1. Path
  2. Snippet (the HTML)
  3. How to fix (an explainer of what is throwing the error)

There's also a useful Highlight Visible button to toggle the overlay. With some other tools I've used previously, the overlay gets in the way a great deal.
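For comparison, axe-core (the engine behind several tools on the list, including the Deque linter mentioned above) exposes much the same three-part breakdown in its results. A rough sketch, assuming axe-core has already been injected into the page:

```typescript
// Sketch: axe-core reports each violation node with roughly the same
// three pieces of information as the Accessibility Insights breakdown.
// Assumes axe-core is already loaded on the page (e.g., via a script tag).
declare const axe: { run: () => Promise<{ violations: any[] }> };

axe.run().then(({ violations }) => {
  for (const v of violations) {
    for (const node of v.nodes) {
      console.log('Path:      ', node.target.join(' ')); // CSS selector path
      console.log('Snippet:   ', node.html);             // the offending HTML
      console.log('How to fix:', node.failureSummary);   // plain-language explainer
    }
  }
});
```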

Did you generate any unexpected results?

I think most newcomers to accessibility, myself included, focus on things like alt text for images and colour contrast, which of course are important. But after using some of these tools it's become apparent that the rabbit hole goes much deeper. One thing that comes up regularly is ARIA attributes, which I knew nothing about until I ran a test on one of my sites that threw around fifty ARIA issues.
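For illustration, here is a minimal sketch of one common class of ARIA fix: a clickable div standing in for a button, which is invisible to assistive tech until it gets a role, keyboard focus, and an accessible name. The selector and label here are hypothetical.

```typescript
// Sketch: retrofit a div-as-button so assistive tech can use it.
const fakeButton = document.querySelector<HTMLElement>('.close-widget'); // hypothetical selector
if (fakeButton) {
  fakeButton.setAttribute('role', 'button');      // announce it as a button
  fakeButton.setAttribute('tabindex', '0');       // make it keyboard-focusable
  fakeButton.setAttribute('aria-label', 'Close'); // give it an accessible name
  fakeButton.addEventListener('keydown', (e) => {
    // Real buttons activate on Enter and Space, not just on click.
    if (e.key === 'Enter' || e.key === ' ') fakeButton.click();
  });
}
```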

Involving Users in Evaluating Web Accessibility

How would you change your testing criteria to include disabled users?

In Saskatchewan, we don't have formal legislation like the ADA in the USA or the AODA in Ontario, so our testing criteria (specifically for online courses, where I currently work) could be described as ad hoc at best. We do have a partner unit, Access and Equity Services, that we have a good relationship with. Side note: I had a session scheduled to check out the assistive tech available in one of the libraries, but the pandemic closed the university before the meeting. I'm hoping to reschedule after everyone returns to campus.

The way online courses tend to be deployed allows revisions and improvements as a retrospective activity rather than a preemptive one. Answering this question is a bit tricky, then, because testing is pretty limited and, in many cases, items are published with little time for revision, if any. Overall, the testing criteria and process need to be revisited, and at that point it would be important to include students and staff with accessibility needs in the assessment. So perhaps step one is to put together a working group on testing protocols for online courses, with disabled users at the table, to set out the process and criteria from the ground up.

Why is it difficult to draw conclusions from single instance user testing?

A couple of things come to mind immediately. One is that the assistive technology one user relies on may interact with the product differently than another: ChromeVox, for example, may read a webpage a little differently than JAWS or NVDA. The second is that the adaptive strategies adopted by one user may not be adopted by similar users (similar according to however they're grouped in the testing protocol). Generalizing research results is always tricky, and off the cuff I think that is a problem with many of the claims made about Universal Design. The best we can do, I think, is create solutions that are as inclusive as possible, and when we discover a case where our product is not inclusive, add that case, revise, and approach things from a continuous-improvement perspective. There can be no one-and-done approach to creating a digital product while claiming it is universal or totally accessible.

How could you make this whole process easier?

After completing the reading, it seems that a reasonable and efficient approach to maximizing the accessibility of a digital product would involve four components: design and development standards, automated testing, internal manual testing, and finally user testing. The hope would be that development standards and automated testing catch the vast majority of problems with the product. Manual testing, as described above, may catch problems with items such as keyboard navigation. Finally, user testing can focus on the experience itself and catch issues that exist between the user and the product rather than internal to the product. Some of the solutions needed at that final step may be technical, but others may have to do with copy or other non-code issues.
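As a sketch of what the automated layer of that pipeline might look like as a CI gate, here is a hedged example using Playwright Test with the @axe-core/playwright package; the URL and the zero-violations threshold are my own assumptions, not something from the readings.

```typescript
// Sketch: fail the build on automatically detectable violations, so manual
// and user testing can focus on keyboard flow, screen-reader experience,
// and copy rather than on basics like alt text and contrast.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('course page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com/course'); // placeholder URL
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]); // assumed threshold: zero violations
});
```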

Tips for Conducting Usability Studies with Participants with Disabilities

At what stage would you recommend user testing for accessibility?

As mentioned above, other forms of testing would be completed first. In general terms, the timing varies widely depending on what the product is; user testing for an app, for example, might begin with paper prototypes. I would guess that for most digital products development would be fairly far along before testing, and that testing could target either a specific type of interaction (a sub-component of a larger product) or a product nearing completion. Multiple rounds of testing are sometimes required, but that is not always conducive to the overall plan.

How would you approach recruitment for user testing?

In my current role, I would likely enlist the assistance of our AES team to put out a call for participation in user testing of online courses or elements of online courses. As the article states, it is important to be proactive about specifying the types of interactions being tested and the participants we hope to recruit. Further, as mentioned above as well as in the article, a baseline should be established beforehand so that the user testing can focus on the interactions and the experience, and so that basic issues such as missing alt text or poor colour contrast don't interrupt the testing.

What challenges can you imagine with remote accessibility testing?

User testing in person can be tricky no matter what, and introducing a technology-mediated protocol for testing digital products adds a whole new layer of complexity. I have been doing a lot of remote assistance for digital products, including online courses, but also helping a family member learn GarageBand so they could put music together for online church services. The mediating technology can really get in the way of what you're trying to do. In addition, the article encourages participants to bring their own assistive tech while having a backup plan, and the backup plan becomes complicated in remote settings. Do you send additional tech to participants? Do you have adaptive strategies in place? What if there is a catastrophic failure of any technology involved in the testing, whether for communication, recording, or anything else? These are general concerns, but like all things design, the challenges will be specific to the context and the tasks we have set out for a particular project.