UX: Accessibility and Security – An Overlooked Balancing Act

 Introduction

When developers think of Accessibility and Security, two things often spring to mind: Nielsen’s Usability Heuristics and the OWASP Top 10. These are standards we use to guide our projects towards being secure and easy for the end user to use, while maintaining the flexibility and expandability of the project. They are excellent standards to follow, but they don’t give a complete picture.

It stands to reason that these are both things that we as developers wish to maximise: we want our clients to find our applications as easy and intuitive to use as possible, and we want users to be secure when they use our software. Often we think of these things as completely unrelated. What cases are there where the two collide? And what do we do when we must prioritize one in favour of the other? While perhaps rare, we still make these decisions more often than most are aware. In this blog we wade into the muddy waters of compromise.

Setting the Scene

First, let’s talk about accessibility and what it means, in simple terms, in the context of developing a web application.

At the most basic level, this means making your application visually consistent: colours denote purpose (particularly in relation to buttons), pages have similar layouts that visually align with one another and are easy to navigate, and the text is readable. Taking it up a notch, it includes supporting multiple client devices: making your pages responsive so they render cleanly on different screen sizes (phones, tablets, etc.), often realigning and resizing important components to account for limited screen space. So far so good; none of these things impact security. But this is where we consider a deeper level of accessibility: accounting for differently-abled clients.

The reality is that people are different, and as developers we have a moral obligation to ensure that our applications do not exclude people as a result of flawed design. While it’s easy to say that your application doesn’t exclude anyone, or that you had no intention of excluding anyone, consider a classic real-world example: a building with no wheelchair access. The architect probably had no intention of preventing wheelchair users from entering the building, but by not considering them in the design and including a ramp or lift, they are excluded nonetheless.

Normally the important things to do to prevent this exclusion are to use simple language, have high-contrast options available or built into the default theme, keep things simple, and make sure interactable elements are large and clearly identifiable, amongst other things.

So how can security conflict with this? Let’s first look at an example where a simple – but not particularly impactful – compromise is made, to get some idea of how these conflicts come to exist.

A Basic Example

Consider the following example: a user wishes to upload some data into your system. This could be something simple such as an icon or picture for their profile, or something more complex like a spreadsheet to be analysed through some process available on your platform.

The user attempts to upload the file and is confronted with an error. Security best practice is to reveal minimal information about the internal implementation of a file upload process, as this can be a vector for attacks, while the user needs enough information to understand how to correct their file so that it’s accepted.

In most cases this is trivial: you can give several reasons why the file may have failed without revealing anything particularly meaningful, such as the file size being too large or the dimensions being wrong. But what if there’s a communication error on the back end or with a third-party API? In most cases we fall back on ‘Something went wrong’ as an error, and this makes sense – either because the problem is hard to explain or, as previously mentioned, because we don’t want to reveal implementation details. However, it bears keeping in mind that by doing this, sensible as it is, we are necessarily prioritizing security over usability. It would be reasonable for a user presented with such an error to repeatedly attempt to upload a file that is never going to go through because of a background problem they have not been informed of, especially if they are not tech-literate.
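One middle ground is to make sure that even the deliberately vague error tells the user whether the failure is their fault, so they know whether retrying will help. Below is a minimal TypeScript sketch of this idea; the error categories, field names, and wording are illustrative assumptions rather than any particular framework’s API:

```typescript
// A minimal sketch of mapping internal upload errors to user-facing
// messages. Error categories and wording here are illustrative.

type UploadError =
  | { kind: "file_too_large"; maxBytes: number }
  | { kind: "invalid_dimensions"; expected: string }
  | { kind: "unsupported_type"; allowed: string[] }
  | { kind: "internal" }; // backend/third-party failure: details stay server-side

function userMessage(error: UploadError): string {
  switch (error.kind) {
    case "file_too_large":
      return `File is too large (maximum ${error.maxBytes / 1_000_000} MB).`;
    case "invalid_dimensions":
      return `Image must be ${error.expected}.`;
    case "unsupported_type":
      return `Unsupported file type. Please use: ${error.allowed.join(", ")}.`;
    case "internal":
      // Deliberately vague to avoid leaking implementation details,
      // but honest that retrying with the same file won't help.
      return "Something went wrong on our side - your file is fine. Please try again later.";
  }
}
```

The key design choice is the final case: the message stays vague about what broke, but is explicit that the user’s file is fine, which heads off the fruitless retry loop described above.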

But this is minor and easily changed if we so desire. So when is this a more major consideration?

Security Processes vs Users

The number one time these things come into conflict is when a security process has been put in place which users must interact with in order to access the application. We’ll now explore a couple of examples where this can happen:

Password Complexity

Perhaps the simplest example is password requirements. At the time of writing, OWASP recommendations include a minimum password length of at least 8 characters, blocking certain common passwords, and a maximum password length. Many applications also enforce the use of special characters, capitalization, and numbers. Most people have no issue abiding by this and choose appropriate passwords, but if your application’s intended clients are people who are particularly disadvantaged, then remembering passphrases or passwords with special characters may be especially challenging. In these cases, it may be better to reduce the complexity requirements for passwords or to look for an alternative sign-in method.
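To make the trade-off concrete, here is a minimal TypeScript sketch of password validation along the lines of the OWASP recommendations mentioned above, with the stricter complexity rules deliberately left out. The maximum length of 64 and the blocklist contents are illustrative assumptions:

```typescript
// A minimal sketch of password validation: minimum length, maximum
// length, and a common-password blocklist, but no special-character
// or capitalization rules.

const MIN_LENGTH = 8;
const MAX_LENGTH = 64; // a cap guards against huge inputs hitting the hash function

// In practice this would be a large list or a breached-password API check.
const COMMON_PASSWORDS = new Set(["password", "12345678", "qwertyui"]);

function validatePassword(password: string): string[] {
  const problems: string[] = [];
  if (password.length < MIN_LENGTH) {
    problems.push(`Password must be at least ${MIN_LENGTH} characters.`);
  }
  if (password.length > MAX_LENGTH) {
    problems.push(`Password must be at most ${MAX_LENGTH} characters.`);
  }
  if (COMMON_PASSWORDS.has(password.toLowerCase())) {
    problems.push("That password is too common - please choose another.");
  }
  return problems; // empty array means the password is acceptable
}
```

For an audience that would struggle with special-character rules, length plus a blocklist like this is a reasonable compromise between security and accessibility.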

Multiple Signup Checks

Consider the following process: a user is signing up to a web service belonging to a specific organization, intended for its staff members or clients. After creating an account, they are notified that they need to activate their account by confirming their email address, and that an admin must also approve their account to make sure they are who they say they are.

While this process may sound a bit odd, you may have encountered such a setup when dealing with an online casino, poker application, or perhaps a government service.

Now, to technical people this is generally seen as straightforward – you go to your emails, confirm the address, and then wait for an approval email to come through – but it can lead to some confusing interactions.

Normally the issue is that people do not view this as two distinct processes. They may interpret an email saying their account has been approved to mean they do not need to verify their email themselves, and then be confused when they are unable to log in – even with errors indicating that they need to complete that verification. The same is true in reverse: once they have verified their email, they may be confused by an error telling them their account is still awaiting approval.
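If this design cannot be avoided, one mitigation is to surface the two steps as distinct, plainly worded errors at login, so the user always knows which step is outstanding. A minimal TypeScript sketch of the idea follows; the account fields and message wording are hypothetical:

```typescript
// A minimal sketch of treating the two activation steps as distinct
// login errors rather than a single generic failure.

interface Account {
  emailVerified: boolean;
  adminApproved: boolean;
}

function activationError(account: Account): string | null {
  if (!account.emailVerified && !account.adminApproved) {
    return "Two steps are still needed: confirm your email using the link we sent you, and wait for an administrator to approve your account.";
  }
  if (!account.emailVerified) {
    return "Your account is approved, but you still need to confirm your email using the link we sent you.";
  }
  if (!account.adminApproved) {
    return "Your email is confirmed. An administrator still needs to approve your account - we'll email you when that's done.";
  }
  return null; // fully activated, allow login to proceed
}
```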

While this may appear to be a simple matter of poor communication in the explanation, tech literacy is often at fault, and this can be compounded by conditions such as executive dysfunction. To give a basic idea of how easily this particular process design can cause issues: professionally, I have encountered it tripping up many people who would not normally be considered ‘disadvantaged’ in any sense of the word, including those who had the process explicitly explained to them before interacting with it. In general, my personal recommendation would be to avoid this process design unless absolutely necessary.

Requiring Additional or Personal Hardware

A very popular security practice in modern times is to enforce two-factor authentication, typically by attaching a phone number to the service. Where it is enforced, it leaves a perhaps obvious and glaring flaw: anyone who does not have an appropriate personal device of their own simply cannot use the service. If this is something you currently enforce, consider making it optional except for those with elevated permissions within your application.
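A minimal TypeScript sketch of what that conditional enforcement might look like follows; the role names and policy function are hypothetical:

```typescript
// A minimal sketch of enforcing two-factor authentication only for
// elevated roles, while leaving it opt-in for everyone else.

type Role = "user" | "moderator" | "admin";

const ELEVATED_ROLES: Set<Role> = new Set(["moderator", "admin"]);

function mfaRequired(role: Role, userHasOptedIn: boolean): boolean {
  // Elevated accounts must use 2FA; everyone else may opt in, so
  // users without a personal device are not locked out entirely.
  return ELEVATED_ROLES.has(role) || userHasOptedIn;
}
```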

Takeaway

This is by no means an exhaustive list, nor are these the only contexts where the conflict has a major impact. Consider questions like: does your captcha or similar anti-bot measure have alternative options for the vision-impaired? How long are your session timeouts compared with how long a particular process might take someone to complete?

The main point here is that just because a process is common and makes things more secure, it is not inherently appropriate or desirable for your application. Always consider who your clients are and what their needs are. This is not an endorsement of flagrantly ignoring good security practice in your designs – indeed, in the right contexts, doing any or all of these things may be the correct choice. It is instead a call to be aware that some of these decisions do impact accessibility and can potentially exclude the more vulnerable members of our community, and to factor that in when we make these design choices.
