A cybersecurity guy I know once told me that a computer can be truly safe only when it’s unplugged and buried 20 feet underground. Users are the main vulnerability in any system, so security experts’ pipe dream is to hide their clients’ systems away from everyone.
UX designers have dramatically different dreams. Designers believe a user’s journey through a web app should be smooth and easy, taking no more than three clicks. They evangelize users’ needs, and they are the ones who keep security experts from burying computers underground.
Can you feel this tension between designers and security?
Security tries to lock everything down. Interface designers try to make everything easy to use. It looks like security and usability are two players in a zero-sum game: making something secure inevitably makes it hard to use, and making something easy to use inevitably makes it less secure.
But security and usability can co-exist. To debunk the mythical relationship between the two, we need to consider three statements:
- People don’t care about security
- Security ≠ locking everything down
- Design ≠ making things easy
Let's start from the top.
People don’t care about security
Not entirely true. They do care, but from a long-term perspective. Everyone wants their personal data and property to be safe; that’s why people install expensive door locks and security cameras.
But bumping into a sudden security notification is something different, something that can be pretty irritating. At the moment you open a particular app, you are doing it for a reason, and the reason is not security but the job that has to be done.
And a sudden pop-up that tells you “your connection is probably somewhat insecure” is a dusty little roadblock between you and your goal. People don’t care about roadblocks. They look for ways around roadblocks, and always find them.
What if a door you often use requires an entry badge that you don’t have? You’d probably find a brick and use it to keep the door open.
If your employer forces you to reset passwords every month, chances are that you’ll write them down on sticky notes so that you don’t forget them.
The average person has passwords for around 100 different accounts. That’s too many to remember, so it’s hardly surprising that people either use very simple passwords or keep a few and reuse them across all their accounts.
When security is something that obstructs the process, people will find a workaround. Make something too secure, and it becomes less secure.
We can conclude that building security walls in front of people trying to do their job doesn’t work, but there are other approaches that do work.
Security ≠ locking everything down
There is UX security that doesn’t lock anything down: it keeps users safe while staying invisible to them. The evolution of CAPTCHA is the perfect illustration of how that’s possible.
Fun fact: did you know that CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”? Those security people can’t help but over-complicate things. No wonder the first text CAPTCHAs were unsolvable for both humans and bots.
The next generation of CAPTCHAs with pictures where you had to select bicycles, buses, or traffic lights was better, but they still made you feel like you were wasting your life.
Then Google presented its “I’m not a robot” checkbox, which let us breathe a sigh of relief. Finally, Google gave us “no CAPTCHA reCAPTCHA,” which makes you do literally nothing to prove you’re human.
Malicious bots don’t usually watch cat videos on YouTube before proceeding with their malicious business. So if Google’s AI decides your previous internet behavior looks human, it doesn’t make you lift a finger. Only if Google is still unsure of your true nature will it make you solve a CAPTCHA as an additional security measure.
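The idea behind this kind of risk-based friction can be sketched in a few lines. This is a toy illustration only: the signal names and thresholds below are invented for the example and have nothing to do with Google’s actual (and secret) scoring logic.

```typescript
// Hypothetical risk-based challenge selection. The signals and thresholds
// are invented for illustration; real systems use far richer models.
type Signals = {
  humanLikeMouseMovement: boolean;
  establishedBrowsingHistory: boolean;
  knownDevice: boolean;
};

function challengeFor(s: Signals): "none" | "checkbox" | "image-captcha" {
  // Count how many human-like signals are present.
  const score = [s.humanLikeMouseMovement, s.establishedBrowsingHistory, s.knownDevice]
    .filter(Boolean).length;

  if (score === 3) return "none";       // confident it's a human: zero friction
  if (score === 2) return "checkbox";   // mildly unsure: a single click
  return "image-captcha";               // suspicious: full challenge
}
```

The key design point survives even in this caricature: the common case (a clearly human visitor) pays no usability cost at all, and friction is added only in proportion to uncertainty.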
But not all security features can be made invisible the way CAPTCHA was. You often need users to perform some action. Asking them to create strong passwords or enable two-factor authentication obviously creates roadblocks on their journeys. Yet the feelings those roadblocks cause depend entirely on the design.
Remember the last time an app asked you to create a complex password? It probably made you read a long list of requirements (and question your life choices). And that’s if you were lucky. If you weren’t, the long requirements list appeared only after you submitted your password, just to say it is invalid because it lacks an uppercase letter, a number, a hieroglyph, and a feather from a hawk.
In this latter case, the interface violates the key principle of human-centered design: visibility. Users should know, just by looking at an interface, what their options are and how to access them. If they don’t know the rules of the game, the error notification turns a little password roadblock into an irritating blockage on their way. We can definitely do better.
Look at Mailchimp’s elegant solution to the password problem. First, all the requirements are laid out up front, so you don’t have to play a guessing game. Second, the requirements update as you type. The way the list greys out items as users type is a great example of a system giving users clear, instant feedback on their actions. Follow a few simple design-for-security principles, and you will minimize the nuisance value of your security measures.
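Under the hood, this pattern is just a list of rules re-evaluated on every keystroke. Here is a minimal sketch; the specific rules are generic examples, not Mailchimp’s actual list.

```typescript
// A live password checklist: each rule is re-checked on every keystroke,
// so the UI can grey out items the moment they are satisfied.
// The rules below are generic examples, not Mailchimp's actual list.
type Rule = { label: string; test: (pw: string) => boolean };

const rules: Rule[] = [
  { label: "One lowercase character", test: pw => /[a-z]/.test(pw) },
  { label: "One uppercase character", test: pw => /[A-Z]/.test(pw) },
  { label: "One number",              test: pw => /[0-9]/.test(pw) },
  { label: "8 characters minimum",    test: pw => pw.length >= 8 },
];

// Called from the input's change handler; the result drives the checklist UI.
function checklist(pw: string): { label: string; done: boolean }[] {
  return rules.map(r => ({ label: r.label, done: r.test(pw) }));
}
```

The point is that the same validation logic that would otherwise fire an error after submission is instead surfaced continuously, turning a rejection into guidance.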
Design ≠ making things easy
Some things are intended to be easy to use but aren’t. Other things are deliberately difficult to use — and ought to be. You can't tell one from the other unless you understand:
- A specific user’s intent
- At a specific time
- In a specific place
Such mindfulness is hard to achieve. To understand users that deeply, designers need to go through the whole design thinking cycle:
- Run user research
- Crystallize the findings into user personas and user journeys
- Build prototypes based on those findings
- Run user testing to observe what users actually do with your digital products
If you’re not in the mood to dig into real users’ intents and want to start designing right now, you may decide to start with the assumption that users want everything to be easy. That’s a pretty logical, but sometimes wrong, assumption. Citibank proved it.
A few months ago, Citibank tried to make $7.8M in interest payments. It sent $900M instead, in what was recognized as one of the biggest blunders in banking history. How does something like that happen?
It all went wrong because the responsible person failed to check two extra boxes in a form. Let’s take a look at that form:
The screen looks like it came from the early 90s, which is a problem on its own. The interface obviously violates the principles of visibility and feedback. When you’re looking at the form, you have no idea how to make the required payment. When you wonder "What happened?" and "What does it mean?", the interface remains mysteriously silent. All set for a huge mistake.
But what is more interesting is how the system reacts to an inevitable mistake.
When you try to transfer an amount a hundred times larger than usual, that’s atypical behavior. So atypical that Google would suspect you were a bot and make you solve a CAPTCHA.
If Citibank’s interface had suspected that something was wrong, it would have shown a dialog box asking whether you really wanted to transfer nearly $1 billion. If you didn’t, you’d decline the action, and the day would probably be saved. But nothing like that happened: three people reviewed the parameters of Citibank’s fatal payment and OK’ed them without a doubt.
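Such a guard can be almost trivially simple. The sketch below is purely illustrative: the ratio threshold is invented, and this is obviously not how Citibank’s system works.

```typescript
// Hypothetical sanity check for atypical transfers. The 10x ratio is an
// invented example threshold, not anything from Citibank's actual system.
function needsConfirmation(amount: number, typicalAmount: number, ratio = 10): boolean {
  // Flag any transfer that is `ratio` times larger than the usual payment,
  // so the UI can show an "are you sure?" dialog before executing it.
  return amount >= typicalAmount * ratio;
}

// $7.8M was intended; $900M went out. Even this naive check would have
// flagged the payment for an explicit confirmation.
needsConfirmation(900_000_000, 7_800_000);
```

A one-line comparison like this is cheap to add, and the cost it imposes on users (one extra dialog, only in rare cases) is exactly the kind of deliberate friction the next section argues for.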
Citibank’s system is obviously overcomplicated, judging by its interface. But it’s oversimplified in terms of security. If you had asked Citibank on that fateful day whether they’d like one little cybersecurity UX roadblock preventing them from blindly OKing atypical operations, they would definitely have said “Yes.”
Clever design should draw users’ attention to the peak moments of their experience. Mailchimp illustrates this point perfectly.
You can’t unsend a sent email. That’s why the moment you press the “Send now” button is very tense, even if your email has only one recipient. Not to mention email campaigns aimed at thousands of people.
At this peak moment, Mailchimp shows you a little stressed monkey hand that knows exactly how you feel. It emotionally connects the brand with its users and gently encourages people to be mindful when pressing the button.
Wrapping up security design principles
- People don’t care about security. Make something too secure, and they will find a way to bypass it.
- Security ≠ locking everything down. If you want people to stick to your security measures, you should ideally make them invisible. If you can’t, use UX design to make them feel less like obstacles on the user journey.
- Design ≠ making things easy. If you find yourself making everything fast and easy, you probably need to understand users’ intent deeper. Sometimes you need to slow people down to highlight what's important.
Security without design irritates people so much that it becomes ineffective, and design without security gives people no support in critical situations. So it turns out that user experience and security are far from being divorced. The best-kept secret of security and UX design is that they actually can't survive without each other.
Take one of our latest clients, Polaris. It’s a security app that helps find vulnerabilities in code; in other words, its job is to bombard clients with irritating security notifications. Polaris quickly understood that product design would have to do its job really, really well to make all those notifications bearable. So they came to Eleken for UI/UX design, and together we balanced security with usability to minimize friction and deliver a delightful user experience.
Want to know how we did it? Read the full story in the Polaris case study.