A cybersecurity guy I know once told me that a computer can be truly safe only when it’s unplugged and buried 20 feet underground. Users are the main vulnerability in any system, so security experts’ pipe dream is to hide their clients’ systems away from everyone.
UX designers have dramatically different dreams. Designers believe the user’s journey through a web app should be smooth and easy, taking no more than three clicks. They evangelize users’ needs, and they are the ones who keep security experts from burying computers underground.
Can you feel this tension between designers and security?

Security tries to lock everything down. Interface designers try to make everything easy to use. It looks like security and usability are two players in a zero-sum game: making something secure inevitably makes it harder to use, and making it easy to use inevitably makes it less secure.
But security systems and usability can co-exist. To debunk the mythical relationship between the two, we need to consider three statements:
- People don’t care about security
- Security ≠ locking everything down
- Design ≠ making things easy
Let's start from the top.
People don’t care about security
Not entirely true. They do care about online safety, but in the long-term perspective. Everyone wants their personal data, sensitive information, and property to be safe and protected from potential risks; that’s why people install expensive door locks and security cameras.
But bumping into a sudden security notification is something different, something that can be pretty irritating. At the particular moment when you open a particular app, you are doing it for a reason, and the reason is not security but the job that has to be done.

And a sudden pop-up that tells you “your connection is probably somewhat insecure” is a dusty little roadblock between you and your goal. People don’t care about roadblocks. They look for ways around roadblocks, and always find them.
What if a door you often use requires an entry badge that you don’t have? You’d probably find a brick and use it to keep the door open.
If your employer forces you to reset your password every month, chances are you’ll write it down on a sticky note so you don’t forget it.
The average person has passwords for about 100 different accounts. That’s too many to remember, so it’s hardly surprising that people either use very simple passwords or keep a few and reuse them across all their accounts.

When security is something that obstructs the process, people will find a workaround. Make something too secure, and it becomes less secure. If security features feel confusing or unnecessary, they can deter users from following the intended process and push them toward risky shortcuts.
Overly complex security measures often backfire because instead of protecting the system, they encourage people to bypass the rules altogether.
We can conclude that building security walls in front of people trying to do their jobs doesn’t work, but there are other approaches that do.
Critical role of UX security design principles
Thoughtful UX can turn frustrating safeguards into intuitive parts of the product experience. The following principles show how to design security that protects users without getting in their way.
Security ≠ locking everything down
There’s a kind of UX security that doesn’t lock anything down — it keeps users safe while staying invisible to them. The most effective cybersecurity measures don’t interrupt the user journey. Instead, they work quietly in the background, protecting systems while letting people focus on completing their tasks.
The evolution of CAPTCHA is a perfect illustration of how this is possible.
Fun fact: did you know that CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”? Security teams can’t help but over-complicate things. No wonder the first text CAPTCHAs were unsolvable for both humans and bots.
The next generation of CAPTCHAs, with pictures where you had to select bicycles, buses, or traffic lights, was better, but it still made you feel like you were wasting your life.
Then Google presented its “I’m not a robot” checkbox, which let us breathe a sigh of relief. And finally, Google gave us “no CAPTCHA reCAPTCHA,” which makes you do literally nothing to prove you’re human.
Malicious bots don’t usually watch cat videos on YouTube before proceeding with their malicious business. So if Google’s AI believes your previous internet behavior looks human, it doesn’t make you lift a finger. Only if Google is still unsure of your true nature will it make you solve a CAPTCHA as an additional security measure.
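The pattern behind invisible CAPTCHA generalizes well: score the request quietly in the background and only interrupt when confidence is low. Here is a minimal TypeScript sketch of that idea; the thresholds and the three-way verdict are illustrative assumptions, not Google’s actual model:

```typescript
// Decide whether a visitor needs an explicit challenge based on a
// background risk score (0 = almost certainly a bot, 1 = almost
// certainly human). Low-risk users pass without lifting a finger;
// only the uncertain middle band sees a CAPTCHA.
type Verdict = "allow" | "challenge" | "block";

function decideVerdict(
  score: number,
  allowAbove = 0.7, // illustrative threshold
  blockBelow = 0.2  // illustrative threshold
): Verdict {
  if (score >= allowAbove) return "allow";   // looks human: stay invisible
  if (score < blockBelow) return "block";    // almost certainly a bot
  return "challenge";                        // unsure: show a CAPTCHA
}

console.log(decideVerdict(0.9)); // → allow
console.log(decideVerdict(0.5)); // → challenge
console.log(decideVerdict(0.1)); // → block
```

The point of the middle band is exactly the “only if Google is still unsure” behavior: most users never see friction at all.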

But not all security features can be made invisible the way CAPTCHA can. Often you need users to perform some action. When you ask them to create strong passwords or enable two-factor authentication, it obviously creates roadblocks on their journey. Yet how those roadblocks feel depends entirely on the design.
The goal of UX security design is to balance protection with a seamless user experience, where users feel safe without constantly being interrupted by security checks.
Remember the last time an app asked you to create a complex password? It probably made you read a long list of requirements (and question your life choices). And that’s if you were lucky. If you weren’t, the long list of requirements appeared only after you submitted your password — to tell you it’s invalid because it lacks an uppercase letter, a number, a hieroglyph, and a feather from a hawk.

In this latter case, the interface violates the key principle of human-centered design: visibility. Users should know, just by looking at an interface, what their options are and how to access them. If they don’t know the rules of the game, the error notification turns a little password roadblock into an irritating blockage on their way. We can definitely do better.
Look at Mailchimp’s elegant solution to the password problem. First, all the technical requirements are laid out up front, so you don’t have to play a guessing game. Second, the requirements update as you type. The way the list greys out items as users type is a great example of a system giving clear, instant feedback on users’ actions so they can make informed decisions. Follow these simple security design principles, and you will minimize the nuisance value of your security measures.
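Mailchimp’s actual rule set isn’t public, so the four rules below are illustrative, but the underlying pattern of per-rule live feedback can be sketched in a few lines of TypeScript:

```typescript
// Each rule is checked independently so the UI can grey out
// satisfied items as the user types, instead of rejecting the
// whole password after the fact with a vague error.
interface Rule {
  label: string;
  test: (pw: string) => boolean;
}

const rules: Rule[] = [
  { label: "At least 8 characters", test: pw => pw.length >= 8 },
  { label: "One uppercase letter",  test: pw => /[A-Z]/.test(pw) },
  { label: "One lowercase letter",  test: pw => /[a-z]/.test(pw) },
  { label: "One number",            test: pw => /\d/.test(pw) },
];

// Returns the state of every rule, ready to render as a checklist.
function checkPassword(pw: string): { label: string; met: boolean }[] {
  return rules.map(r => ({ label: r.label, met: r.test(pw) }));
}

const allMet = (pw: string) => checkPassword(pw).every(r => r.met);

// Re-run on every keystroke and re-render the checklist:
console.log(checkPassword("Pass1").map(r => `${r.met ? "✓" : "✗"} ${r.label}`));
```

Because every rule carries its own label and its own test, the interface never has to say “invalid password” without saying exactly which requirement is still unmet.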

Design ≠ making things easy
Some things are intended to be easy to use but aren’t. Other things are deliberately difficult to use — and ought to be. You can't tell one from the other unless you understand:
- A specific user’s intent
- At a specific time
- In a specific place
Designers can’t assume how users will react to security features. Gathering user feedback through research and interviews helps teams understand where security creates friction and how to improve it.
Such mindfulness is hard to achieve. To understand users that deeply, designers need to work through the whole design thinking cycle:
- Run user research.
- Crystallize the findings into user personas and user journeys.
- Build prototypes based on those findings.
- Run user testing to observe what users actually do in your digital product.
- Conduct usability testing to see how real users interact with security features.
If you’re not in the mood to dig into real users’ intents and want to start designing right now, you may decide to start from the assumption that users want everything to be easy. That’s a pretty logical, but sometimes wrong, assumption. Citibank proved it.
A few months ago, Citibank was trying to make $7.8M in interest payments. It sent $900M instead, in what was recognized as one of the biggest blunders in banking history. How does something like that happen?
It all went wrong because the person responsible failed to check two extra boxes in a form. Let’s take a look at that form:

The screen looks like it came from the early 90s, which is a problem on its own. The interface obviously violates the principles of visibility and feedback. When you’re looking at the form, you have no idea how to make the required payment. When you wonder "What happened?" and "What does it mean?", the interface remains mysteriously silent. All set for a huge mistake.
But what is more interesting is how the system reacts to an inevitable mistake.
When you try to transfer an amount a hundred times larger than usual, that’s atypical behavior. So atypical that Google would suspect you’re a bot and offer you a CAPTCHA to solve.
If Citibank’s interface had suspected that something went wrong, it would have shown a dialog box asking whether you really wanted to transfer nearly $1 billion. If you didn’t mean to, you’d decline the action, and the day would probably be saved. But nothing like that happened: three people reviewed the parameters of Citibank’s fatal payment and OK’ed them without a doubt.
Citibank’s system is obviously overcomplicated, judging by its interface. But it’s oversimplified in terms of security. If you had asked Citibank on that doomed day whether they’d like one little cybersecurity UX roadblock preventing them from blindly OKing atypical operations, they would definitely have said “yes.”
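Such a roadblock doesn’t require anything sophisticated. A hedged sketch in TypeScript: flag a payment as atypical when it exceeds some multiple of the account’s historical average, and require an explicit confirmation before letting it through. The ten-times factor is an illustrative assumption, not a real banking rule:

```typescript
// Flag a payment as atypical when it dwarfs the account's usual
// payments, so the UI can show an "Are you sure?" dialog instead
// of silently accepting a catastrophic typo.
function isAtypical(amount: number, history: number[], factor = 10): boolean {
  if (history.length === 0) return true; // no baseline: always confirm
  const avg = history.reduce((a, b) => a + b, 0) / history.length;
  return amount > avg * factor;
}

// Typical interest payments for this account (illustrative numbers).
const pastPayments = [7_000_000, 8_000_000, 7_800_000];

console.log(isAtypical(7_800_000, pastPayments));   // → false: goes straight through
console.log(isAtypical(900_000_000, pastPayments)); // → true: show a confirmation dialog
```

Everyday payments sail through untouched; only the hundred-times-larger outlier triggers the one extra click that would have saved Citibank’s day.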
Clever design should draw users’ attention to peak moments in their experience. Mailchimp illustrates this point perfectly.
You can’t unsend a sent email. That’s why the moment you press the “Send now” button is very tense, even if your email has only one recipient. Not to mention email campaigns aimed at thousands of people.
At this peak moment, Mailchimp shows you a little stressed monkey hand that says “we know how you feel.” It emotionally connects the brand with its users and gently encourages people to be mindful when pressing the button.

UX in cybersecurity and data privacy
Good UX is a crucial element of modern cybersecurity. Many security issues happen not because security tools fail, but because people don’t understand how to use them. Poor user interfaces, confusing privacy settings, or unclear warnings can expose products to cyber attacks, data breaches, and risky data collection practices.
To prevent this, many UX designers work closely with the cybersecurity team when building secure digital products. Here are practical ways UX can improve cybersecurity practices and data privacy.
1. Structure security settings with clear information architecture
Security and privacy settings should be easy to find and understand.
Good information architecture helps users quickly locate important controls like:
- password settings.
- multi-factor authentication.
- device management.
- data privacy and data collection permissions.
When security options are scattered across the interface, users are more likely to ignore them.
2. Make secure actions clear and understandable
Security prompts should explain why something is happening.
For example:
- why a login attempt is blocked.
- why a password must meet certain requirements.
- why additional verification is required.
Clear explanations help users trust the system instead of trying to bypass cybersecurity measures.
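One lightweight way to implement this is to keep a single mapping from internal security-event codes to explanations written in users’ language. The codes and wording below are illustrative, not from any specific product:

```typescript
// Map internal security-event codes to explanations users can act on.
// Raw codes like LOGIN_BLOCKED_NEW_DEVICE should never reach the UI.
const explanations: Record<string, string> = {
  LOGIN_BLOCKED_NEW_DEVICE:
    "We blocked this sign-in because it came from a device we haven't seen before. Check your email to confirm it was you.",
  PASSWORD_TOO_WEAK:
    "Your password needs to be longer so it can't be guessed quickly. Try a phrase of several words.",
  STEP_UP_REQUIRED:
    "You're changing sensitive account details, so we need a one-time code to confirm it's really you.",
};

// Fall back to a calm generic message rather than leaking a raw code.
function explain(code: string): string {
  return explanations[code] ?? "Something needs your attention before you can continue.";
}

console.log(explain("LOGIN_BLOCKED_NEW_DEVICE"));
```

Centralizing the copy this way also lets writers and designers review every security message in one place, instead of hunting for strings scattered across the codebase.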
3. Continuously test security flows with real users
Security features often behave differently from normal product flows, which is why teams should continuously test them.
Use usability testing to observe how people interact with:
- authentication flows.
- password creation.
- multi-factor authentication.
- security alerts.
Testing helps identify confusion early before it becomes a vulnerability.
4. Design warnings that highlight risky actions
Security UX should draw attention to moments where mistakes can cause serious damage.
Examples include:
- transferring unusually large amounts of money.
- deleting important data.
- changing privacy settings.
These checkpoints help prevent security issues without slowing down everyday tasks.
5. Build security into the product experience
The best security products protect users without constant interruptions.
Effective design aims for an optimal balance between protection and usability:
- everyday actions stay simple.
- unusual behavior triggers additional security checks.
- privacy controls remain transparent.
When the UX design process supports security, organizations reduce risk while delivering a seamless user experience.
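This “simple by default, stricter when unusual” balance is often called adaptive or risk-based authentication. A minimal sketch in TypeScript, where the signals and weights are illustrative assumptions:

```typescript
// Adaptive (risk-based) authentication sketch: everyday logins stay
// frictionless, while unusual signals trigger an extra check.
interface LoginContext {
  knownDevice: boolean;    // has this device signed in before?
  usualLocation: boolean;  // is the login from a familiar place?
  sensitiveAction: boolean; // e.g. changing payout details
}

type AuthStep = "password-only" | "two-factor";

function requiredAuth(ctx: LoginContext): AuthStep {
  // Illustrative weights: sensitive actions count double.
  const risk =
    (ctx.knownDevice ? 0 : 1) +
    (ctx.usualLocation ? 0 : 1) +
    (ctx.sensitiveAction ? 2 : 0);
  return risk >= 2 ? "two-factor" : "password-only";
}

// The everyday case stays simple:
console.log(requiredAuth({ knownDevice: true, usualLocation: true, sensitiveAction: false }));
// → password-only
```

A regular login from a known device asks for nothing extra; a new device in a new location, or any sensitive change, steps up to a second factor — exactly the trade-off the list above describes.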
Wrapping up security UX design principles
Before we finish, let’s quickly recap the key UX security principles discussed above and how they help balance protection with usability and user satisfaction:
- People don’t care about security. Make something too secure, and they will find a way to bypass it.
- Security ≠ locking everything down. If you want people to stick to your security measures, you should ideally make them invisible. If you can’t, use UX design to make them feel less like obstacles blocking users on their user journeys.
- Design ≠ making things easy. If you find yourself making everything fast and easy, you probably need to understand users’ intent deeper. Sometimes you need to slow people down to highlight what's important.
Security without design irritates people so much that it becomes ineffective, and design without security gives people no support in critical situations. So it turns out that user experience and security are far from being divorced. The best-kept secret of security and UX design is that they actually can't survive without each other.
Take one of our latest clients, Polaris. It’s a security app that helps find vulnerabilities in code — that is, its job is to bombard clients with irritating security notifications. Polaris quickly understood that its product design should do its job really, really well to make all those notifications bearable for users. So they came to Eleken for UI/UX design services, and together we balanced security with usability to minimize friction and ensure a delightful user experience.

Want to know how we did it? Read the full story in the Polaris case study.


