
It is encouraging to see that accessibility - the practice of making information available and usable for people with a wide range of functional abilities - is gradually moving to the forefront for businesses in the technology sector. Many tech professionals are also seeking training in WCAG, the Web Content Accessibility Guidelines, as well as learning about methodologies that encourage inclusivity.
Yet a part of me wonders - are we merely throwing these buzzwords into our conversations without any substantial understanding of the subject? As a user experience (UX) designer who creates websites, apps, and platforms, I had a gnawing feeling that there was a gap between following accessibility standards like WCAG and actually designing a product that would genuinely benefit those with various accessibility needs.
So as part of my training, I decided to volunteer in the Distance Computer Comfort program at the Neil Squire Society in order to gain first-hand experience working with someone who has a disability. The Neil Squire Society is an organization that empowers Canadians with disabilities through accessible technology, devices, and programs; in the Distance Computer Comfort program, I would help a client with disabilities become more comfortable using a computer.
I contacted the Neil Squire Society about my interest, and the program coordinator matched me with a client whose needs and goals aligned with the skills I could teach. To keep the client anonymous, I will use the name “Hunter” and the pronouns “they/them” to refer to them throughout this blog post.
A bit of background on Hunter’s disability: they had a childhood stroke that led to secondary dystonia (involuntary muscle contractions that can be painful) affecting one side of their body. Over the years, they had further health issues and two more, smaller strokes. These caused some hard-to-define deficits, a higher degree of stress, a minor movement disorder on one side of the body, and frequent migraines. It can be difficult for Hunter to learn new things, as they struggle to retain information.
Hunter’s goal with this program was to become more comfortable using technology, especially the features of Google Docs and Microsoft Word. The program was structured so that I met with Hunter one-on-one, remotely, on a weekly basis for 12 sessions. Each session was approximately 1.5 to 2 hours long and was conducted over Zoom without turning on our video cameras.
From a UX designer’s perspective, I was often surprised at how a feature I had initially thought was well designed could unintentionally have a negative impact on the experience of someone who was not comfortable using technology. It saddened me to witness how Hunter would blame themselves for finding a task difficult, when perhaps it was simply something that wasn’t considered in the product design process.
As it was Hunter’s goal in this program to be more confident using Google Docs and Microsoft Word, I will be sharing examples of their experiences in these two applications. It is also critical to note that Hunter’s experience is not representative of everyone who has accessibility needs; on the other hand, I believe we may all relate to certain aspects of Hunter’s experience to some degree.
Another thing to mention is that I was not made aware of the adaptive technologies or accessibility features that Hunter may have used during our sessions together. Therefore, I am unable to comment on how using these technologies (or not) may have impacted Hunter’s experience.
That being said, we’re now ready to dive into the takeaways from my sessions with Hunter, seen from three key perspectives.
For the purposes of sharing Hunter’s experience to illustrate what I learned from them, I am going to make the assumption that you, the reader, have had some exposure to using a text editor tool like Google Docs or Microsoft Word.
Here’s a friendly little pop quiz for you: have you ever tried creating a table in Google Docs? If so, without looking at the interface, do you know the name of the menu that allows you to create a table in Google Docs? What about changing the colour of the table cells?
I’ll give you a little hint: those two actions aren’t in the same place. You can create a table by going to the ‘Insert’ menu at the top, but to change the colour of the table cells, you’ll have to go to ‘Table Properties’ by right-clicking on the table, or use ‘Background Fill’ in the toolbar.
Perhaps that wasn’t too difficult for those of you who consider yourselves to be tech savvy. But did you know that the ‘Background Fill’ action disappears when the screen size decreases? It collapses into a ‘More Actions’ menu as there isn’t enough space to display it on a smaller screen.
Can you imagine how all these details of where actions are located - details that can also change with circumstances - could be so discouraging to learn for someone uncomfortable with technology, let alone someone who finds it hard to remember information?
Take a moment to imagine what it’s like for Hunter: every session spent relearning where an action lives, only to find it has moved because the window happens to be a different size.
I don’t know about you, but I think I’d be constantly discouraged, frustrated, and overwhelmed.
To be clear, I’m not saying that hiding actions behind a ‘More Actions’ menu is necessarily bad design. This UX pattern is great for keeping actions available on small screens while keeping the interface uncluttered. There is no right or wrong in design, no perfect solution - just pros and cons to weigh in the context of your product. However, we need to be careful not to use that as a reason to diminish anyone’s experience, especially for those who find it difficult to use technology.
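For the technically curious, here is a minimal sketch of the overflow pattern in TypeScript, using standard DOM APIs. It is purely illustrative - not how Google Docs actually implements its toolbar - and the element IDs and the ‘action’ class name are hypothetical:

```typescript
// A minimal sketch of the overflow-toolbar pattern: actions that no
// longer fit are moved into a 'More Actions' menu rather than dropped.
// Illustrative only - not Google Docs' actual code. The element IDs
// and the 'action' class name are hypothetical.
const toolbar = document.getElementById('toolbar') as HTMLElement;
const moreButton = document.getElementById('more-actions-button') as HTMLElement;
const moreMenu = document.getElementById('more-actions-menu') as HTMLElement;

function relayout(): void {
  // Move everything back into the toolbar first, so we can measure.
  while (moreMenu.firstChild) {
    toolbar.insertBefore(moreMenu.firstChild, moreButton);
  }
  // Then move trailing actions into the menu until the rest fit.
  const available = toolbar.clientWidth - moreButton.offsetWidth;
  let used = 0;
  for (const action of Array.from(toolbar.querySelectorAll<HTMLElement>('.action'))) {
    used += action.offsetWidth;
    if (used > available) {
      moreMenu.appendChild(action); // e.g. 'Background Fill' lands here
    }
  }
  // Hide the 'More Actions' button when nothing has overflowed.
  moreButton.hidden = moreMenu.childElementCount === 0;
}

// Re-run the layout whenever the toolbar's size changes.
new ResizeObserver(relayout).observe(toolbar);
```

The point of the sketch is that the very same action can live in two different places depending on available width - exactly the moving target Hunter had to contend with.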
Hunter’s experience reminded me of the foundational principles in Don Norman’s The Design of Everyday Things, and prompted me to think more deeply about them from the perspective of someone who finds it challenging to retain information.
For those of us who consider ourselves non-disabled, we generally don’t think twice about using a keyboard or mouse to navigate applications on the computer. So when a prompt appears stating that you need to press Control (or Command for Mac users) on the keyboard and then click with your mouse to open a link, we probably wouldn’t think much of it.
Hunter, on the other hand, was a little worried upon seeing this prompt. Imagine what it’s like for someone with mobility issues like Hunter: holding down a key with one hand while clicking precisely with the mouse in the other is no small feat when one side of your body doesn’t fully cooperate.
Once again, I’m not saying that this Control-and-click shortcut for opening a link is an outright terrible design decision. There are always reasons behind the scenes that we don’t fully understand. My guess is that the designers didn’t want users to accidentally open the link when all they wanted was to click into the text and edit it.
Thankfully, Hunter was able to find another way to open the link - by right-clicking and selecting “Open Hyperlink” in the context menu. I was very glad to see that Microsoft Word provided alternative ways to perform the same action. Hunter’s experience reminded me of the necessity of simplicity and variety, and of considering them from the physical perspective of using various devices.
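Here is my guess at the kind of logic behind this behaviour, sketched in TypeScript. To be clear, this is not Microsoft Word’s actual implementation, and the placeCursorAt helper is hypothetical - but it shows how one gesture can be disambiguated while a context menu offers a key-free alternative:

```typescript
// A sketch of the disambiguation pattern: a plain click on a link
// places the text cursor for editing, while Ctrl/Cmd+click follows
// the link. Illustrative only - not Microsoft Word's actual code.
function onLinkClick(event: MouseEvent, url: string): void {
  const modifierHeld = event.ctrlKey || event.metaKey; // Ctrl, or Cmd on Mac
  if (modifierHeld) {
    window.open(url, '_blank'); // follow the link
  } else {
    placeCursorAt(event); // plain click: let the user edit the link text
  }
}

// The same action without holding a key while clicking - what Word
// exposes as 'Open Hyperlink' in the right-click context menu.
function onContextMenuOpenHyperlink(url: string): void {
  window.open(url, '_blank');
}

// Hypothetical helper: positions the editing caret at the click point.
declare function placeCursorAt(event: MouseEvent): void;
```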
I confess that as a UX designer, I tend to think about accessibility from a visual perspective - colour contrast being the easiest thing to address. However, there were other visual elements on the interface that I didn’t realize could cause confusion, even when their intent was to empower the user.
For example, helpful tips can appear on the interface to let the user know that there are more advanced options. When you’re in the middle of a task and you see one of those pop-ups - what do you do? Depending on the relevance of the tip, you may find it helpful and appreciate learning about a new feature that you can apply in the future. Or you may just skim the content quickly and dismiss it as fast as possible, in order to resume the task at hand. Either way, it’s usually a decision we make on the spot without thinking about it.
But that’s not what it’s like for Hunter. Imagine having to stop and make sense of an unexpected pop-up when reading and retaining new information already takes real effort.
I was surprised (though I shouldn’t have been) at how some of these tips would stay open and get in the way even while Hunter was doing something else on the interface. Of course, there are valid reasons for this from a product design perspective. But Hunter’s experience reminded me of how many of the accessibility webinars I’ve attended recommend minimizing the use of dialog boxes - because of the interruption to the user’s flow and the navigation challenges they pose for keyboard-only users - and prompted me to think about visual accessibility beyond size and contrast.
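One gentler alternative in that spirit is to present tips as non-modal messages: announced politely by screen readers, never stealing keyboard focus, and dismissible at the user’s own pace. A minimal sketch, assuming a plain DOM setup (styling and positioning omitted, and not modelled on any particular product):

```typescript
// A minimal sketch of a less intrusive tip. role="status" makes the
// element a polite live region: screen readers announce it without
// interrupting, and keyboard focus stays where the user is working.
function showTip(message: string): void {
  const tip = document.createElement('div');
  tip.setAttribute('role', 'status');
  tip.textContent = message;

  // Let the user dismiss the tip on their own terms - no timer,
  // no modal overlay blocking the task underneath.
  const dismiss = document.createElement('button');
  dismiss.textContent = 'Dismiss';
  dismiss.addEventListener('click', () => tip.remove());

  tip.append(' ', dismiss);
  document.body.append(tip);
}
```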
I was humbled by these practical takeaways, which served as a great reminder of what to look out for as a designer. But the most important lesson I learned from my time with Hunter was none of these things.
As a UX designer in the field, the sense I get from most tech companies is that we’ve stripped the essence of accessibility down to a mere checklist of compliance standards the product should meet. Ensuring a colour contrast ratio of at least 4.5:1. Including alt text on images for screen readers. Labelling buttons and links correctly.
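That 4.5:1 figure isn’t arbitrary, by the way - it comes from WCAG’s relative luminance formula, and it’s easy to check in code. Here’s a small sketch of the calculation as defined in WCAG 2.x:

```typescript
// Computes the WCAG contrast ratio between two sRGB colours, each
// given as [r, g, b] with channels in 0-255. WCAG AA asks for at
// least 4.5:1 for normal-sized text.
function relativeLuminance([r, g, b]: number[]): number {
  const linearize = (channel: number): number => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

function contrastRatio(fg: number[], bg: number[]): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1));       // "21.0" - black on white passes
console.log(contrastRatio([200, 200, 200], [255, 255, 255]).toFixed(1)); // "1.7" - light grey on white fails
```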
Don’t get me wrong - these guidelines are wonderful in providing clear guidance for everyone who wants to create accessible solutions. I’m very thankful for WCAG, which was my first exposure to accessibility and my stepping stone into pursuing it.
Yet strict conformity to these standards seems contrary to the nature of inclusivity - everyone is different, and no two disabilities are alike. I’ve heard countless times that accessibility does not look the same for everyone.
So what was the aha moment, the insightful lesson I learned on how we can better design products and experiences that are inclusive to all, beyond just following a compliance checklist?
I’ll let you think about it before I tell you in Part 2 of this series. Stay tuned!