A few months ago we received this comment from a user who was looking to cancel their subscription:
“Doesn’t look like I am able to set up HTTP monitoring for non-200 responses.”
We almost lost this customer because some basic functionality in Server Density was not obvious enough.
Unintuitive interfaces are bad news. They prolong learning curves, cause users to make mistakes, and increase the likelihood that a user will think a piece of functionality is missing when, in fact, it’s not.
Here is the tricky part. Usability is an investment whose value might not be immediately obvious. Software providers often win projects based on front-loaded benefits and quick turnarounds. In these types of projects, usability often ends up taking a back seat.
For us, a product team that works full time on improving Server Density, the results of usability efforts are immediate and very quantifiable. You just need to know where to look. You can see them in the number of support tickets associated with a particular feature, for example. You see them in customer satisfaction surveys. And you can most definitely see them in churn, since usability is the number one driver of churn.
If people can’t onboard because usability sucks, they likely won’t get value out of your service. Functionality can be powerful, but if it’s not usable, users won’t use it. Which means they won’t get any value from it. Which means you may as well not have bothered.
Good usability unlocks functionality and adds value. Value reduces churn.
How we Improve Usability in Software – 5 Key Principles
While it takes some work, “measuring” the usability of your product is relatively straightforward. It’s a case of collecting and analysing data points from support tickets, SAT surveys, email, et cetera. What is not as straightforward is how to improve usability. Where do you look? Where do you allocate time, effort and budget? What systems do you put in place?
We don’t have all the answers. As a small team of engineers who are figuring things out as we go along, we are constantly fine-tuning our approach. Here is what we’ve learned so far.
Economy of effort
We favour existing components over creating new ones. Creating highly usable interface elements takes significant effort and time. It involves rigorous usability tests, iterations, support tickets, and unit tests. It’s not easy. Any component that’s been in our code base for a while has gone through this process. New ones have not.
Finger on the pulse
Our UI team makes a point of reading every single support ticket so we’re aware of the problems and frustrations our users experience. We take every opportunity to talk to customers, including—but not limited to—our support channel on Slack, meetings with customers, emails and telephone calls, automated satisfaction surveys, and informal chats. The insights gained from these inform our product roadmap, but also day to day decisions like how we implement specific features. It’s easier to know the “right” way to build something when you have dozens of support tickets informing that decision.
Asking questions
We often share designs with our target audiences before implementing them. We then ask questions like: “What does this do?”, “What does this indicate?”, “How would you expect to interact with this?”. Or more pointed questions, like “What’s your least favourite area of the UI?” and “What one thing would you change about the UI?”
Such rigorous questioning has a tendency to unearth problems with our design language. A dropdown should look like a dropdown, for example, and an icon should instantly communicate its meaning.
Knowing how to listen and get as much insight as we can from each customer interaction is a key skill. Our aim is to always keep customers “talking”.
Measure against Goals
We define measurable goals for each new feature, say the addition of IPv6 support or the new universal search (both coming soon). The obvious goal here is to achieve a certain percentage of user adoption. We are able to track this using specific events in Totango.
If enough users embrace our features it means they are discoverable and intuitive. It means we did something right with our interface design. Strictly speaking, feature adoption indicates much more than usability (pent-up demand, for example), but if our adoption goals aren’t met, we look at usability first. Are users able to discover the feature? Is it intuitive? Does it work?
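To make that concrete, here is a rough sketch of what adoption tracking amounts to. The helper and endpoint below are hypothetical (our real events go to Totango through its own tracking calls), but the idea is simply to record a named event when a user reaches a feature, then compare the distinct users who fired it against the adoption target:

```typescript
// Hypothetical adoption-event helper. In our case these events end up in
// Totango; the /internal/events endpoint here is just a stand-in.
interface AdoptionEvent {
  userId: string;
  feature: string;   // e.g. "universal-search"
  timestamp: string; // ISO 8601
}

async function trackFeatureUse(userId: string, feature: string): Promise<void> {
  const event: AdoptionEvent = {
    userId,
    feature,
    timestamp: new Date().toISOString(),
  };
  try {
    // Fire-and-forget: tracking failures must never break the UI.
    await fetch("/internal/events", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
    });
  } catch {
    // Swallow errors; analytics is best-effort.
  }
}

// Example: record that the current user opened universal search.
void trackFeatureUse("user-123", "universal-search");
```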
Observe Real Users
Every feature must go through multiple rounds of testing with real users. The most obvious way to do this is with usability tests (next section).
Types of Usability Tests
1. Dogfooding
It’s the ultimate usability test. We always test features internally before releasing them to customers. Almost every day one of our backend developers will raise a Jira ticket for an interface element we should improve.
2. Internal Usability Tests
We run internal usability tests as often and as early as possible (for every feature we release).
We like to test with users of varied technical backgrounds so we can get equally varied feedback. A technical user may navigate our interface with ease. An inexperienced user might struggle. The opposite can also be true. That’s why we test. We try our best not to assume.
Once we’ve extracted every last bit of insight from our internal reviews and tests, we take the most common problems and define action points. We then improve the UI based on these action points before we broaden our scope and repeat the test with real customers. It’s an ongoing iterative process.
3. External Usability Tests
Our customer success manager invites specific end-users to participate in usability tests. It’s usually folks who’ve worked with us for a while and who we’ve built a solid relationship with. We try to match specific feature tests with specific customers: customers who we know will make use of this feature, for example. We also pick customers based on their previous feedback or requests and whether they match the target audience for the specific feature we are testing.
How we Run Usability Tests
The way we run usability tests is by no means perfect: it’s a constantly evolving process. But the following steps are what we have found to give a solid ROI.
1. Define a scenario.
Example: “You’ve been trialing Server Density for 2 weeks. You like the product, and it’s time to upgrade.” Scenarios are tied to our product feature specs. We define those at the start of a project and share them with a few key customers.
2. Define tasks.
Example: “Do what you need to do in order for your Server Density account not to expire at the end of your trial period.” Be careful not to use words which may bias the results. We find that less specific tasks give the most useful feedback.
3. Run the scenario and associated tasks.
We do this remotely by asking users to share their screen and think aloud (see “Think out Loud” below).
4. Take notes.
One colleague leads the test, while another takes notes. The engineer who developed the feature is usually the one leading the test. The person taking notes writes down everything; we never assume something is irrelevant.
5. Action
Once we’ve run the test with multiple users, we consolidate all feedback and construct action points. We take a problem we’ve observed and turn it into “change X”. We implement 90% of action points immediately and triage the rest.
Some key points:
Be careful not to “help” the user.
As painful as they are (and they are painful), the most useful moments are when a user is completely lost. Instead of helping the user, you should encourage them to…
Think out Loud
At the start of the usability test we ask the user to “think out loud”. Without knowing their thought process we can’t know what is wrong, only that something is wrong. If the user goes quiet, we remind them to “think out loud”.
Test the experience, not just the UI.
It’s important you don’t stage the test too much. For example, let the user navigate their way to the form you want to test, don’t drop them straight there. It’s valuable to know that a form suffers from usability issues, but it’s even more valuable to know that the user had trouble getting to that form.
Keep it diverse
We test every new feature with at least two team members. We then improve the UI based on their feedback before we test with customers.
We invite people with little technical experience and people with lots. Both groups are helpful. Feedback from these two groups often contradicts each other. When that happens we have to decide which group is closer to our target audience.
Mind you, we don’t always action user feedback. A customer might think they want one thing and really be asking for another. Or their request may not align with where we are going with our product.
Examples of Usability Improvements
Let’s look at a few small examples of improvements (out of many) we’ve made as a result of usability tests, support tickets, speaking to customers and tracking user events through Totango.
Dropdown Menu
Our dropdown component appears extensively throughout the UI, and it’s therefore been the subject of many usability tests.
For example, in order to select the first item in the search results the user had to press <down> and then <enter>, or move the mouse to click the item. This added friction, and users complained about it in usability tests.
Now, when you type a search term the first result is selected by default. All you have to do is press enter.
The improvements didn’t stop there:
- The top level items (Apache, MongoDB, System) now respond to right and left keys to show/hide their children
- We added an (x) item that clears the search
- We fixed a very obscure bug with keyboard controls, where the focussed item (in this case CPU Load) could lie outside of the list’s bounds.
- We improved scrolling in long lists. Before usability tests, the next focus item could lie below the dropdown bounds, which meant it was not visible.
And we’ve got plenty more improvements in the backlog that we’ve yet to implement.
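For the curious, here is a minimal React sketch of the keyboard behaviours described above. It’s not our actual dropdown component, just an illustration of the pattern: the first result is selected by default, arrow keys are clamped to the list’s bounds, and the focused item is kept scrolled into view.

```tsx
import React, { useEffect, useRef, useState } from "react";

interface DropdownSearchProps {
  items: string[]; // e.g. metric names like "CPU Load"
  onSelect: (item: string) => void;
}

export function DropdownSearch({ items, onSelect }: DropdownSearchProps) {
  const [query, setQuery] = useState("");
  const [selected, setSelected] = useState(0);
  const listRef = useRef<HTMLUListElement>(null);

  const matches = items.filter((i) =>
    i.toLowerCase().includes(query.toLowerCase())
  );

  // Whenever the query changes, default to the first result so a single
  // <enter> is enough to pick it.
  useEffect(() => {
    setSelected(0);
  }, [query]);

  // Keep the focused item within the visible bounds of the list.
  useEffect(() => {
    listRef.current?.children[selected]?.scrollIntoView({ block: "nearest" });
  }, [selected]);

  const onKeyDown = (e: React.KeyboardEvent) => {
    if (e.key === "ArrowDown") {
      // Clamp to the list bounds so the focus can never fall outside them.
      setSelected((s) => Math.min(s + 1, matches.length - 1));
      e.preventDefault();
    } else if (e.key === "ArrowUp") {
      setSelected((s) => Math.max(s - 1, 0));
      e.preventDefault();
    } else if (e.key === "Enter" && matches[selected]) {
      onSelect(matches[selected]);
    }
  };

  return (
    <div>
      <input
        value={query}
        placeholder="Search…"
        onChange={(e) => setQuery(e.target.value)}
        onKeyDown={onKeyDown}
      />
      <ul ref={listRef}>
        {matches.map((item, i) => (
          <li key={item} className={i === selected ? "focused" : ""}>
            {item}
          </li>
        ))}
      </ul>
    </div>
  );
}
```

Our production component handles much more (nested groups, left/right keys, the clear button), but the pattern is the same.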
Save Status
“I don’t trust that changes I make in the UI are getting saved”.
We used to get such feedback in various support tickets and discussions with customers.
As is the case with many single page applications, saves happen automatically and behind the scenes. And yet, during our usability tests we observed that users did not feel 100% certain their changes had saved. To cater for that we added two pieces of generic code:
- A Save Status mixin, which listens to the save status of a model or collection.
- A Save Status component, which updates the user when something is saving, has just saved, or failed to save.
Those two pieces of code are now invoked whenever a user makes a persistent change.
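As a rough illustration (not our actual mixin or component, which hook into our model layer), the indicator only needs to map the save lifecycle onto a handful of states and render the right message for each:

```tsx
import React from "react";

type SaveStatus = "idle" | "saving" | "saved" | "error";

// Minimal sketch of a status indicator: whatever listens to the model's save
// lifecycle passes the current status down, and this component renders the
// appropriate message for it.
export function SaveStatusIndicator({ status }: { status: SaveStatus }) {
  switch (status) {
    case "saving":
      return <span className="save-status saving">Saving…</span>;
    case "saved":
      return <span className="save-status saved">All changes saved</span>;
    case "error":
      return <span className="save-status error">Save failed. Please retry.</span>;
    default:
      return null;
  }
}
```

Because the same component sits behind every persistent change, users get consistent feedback everywhere in the UI without each feature reinventing it.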
What we learned
There is no such thing as a usability silver bullet. Usability is the product of thousands of deliberate improvements. Some of those improvements will be tiny; others will be head-scratching, monolithic improvements requiring a complete re-think. Here are some things we observed and learned over the years:
- Keep things simple. As designers, it’s always tempting to focus on the latest shiny UI pattern. And yet most users don’t care about clever or shiny. Usability tests serve as a good reminder. They encourage us to make things function in the simplest, most obvious way.
- Designers love icons because they present a nice intellectual challenge. Problem is, icons are hard to get right. In our experience users don’t “get” them. At least not until they’ve experimented by clicking on them. That’s not a pleasant experience. It’s also the opposite of intuitive.
- Is the app saving, is it loading, is it in an error state? In the age of single page applications it has become important for interfaces to state what is happening at any given point.
- Adding help text is not a sufficient workaround for most UI challenges. Coming up with short, precise, and explicit on-screen instructions is not easy. Users tend to ignore them anyway. If you find yourself thinking “I need to add some text to explain this”, then stop and think: how can I improve the UI so that help text is not required because users just “get” it?
- On the other hand, error messages and validation are useful. Chances are you’ll need more of them than you expect. Done badly, they cause more confusion than they prevent. They need to be short, explicit and tell the user exactly what to do.
- Speaking of users, the difference between an experienced user and an inexperienced user couldn’t be greater. Make sure you test with both.
- Keyboard controls are important, especially for the technically proficient and busy sysadmins (our audience). They are also quite tricky to implement, which is why we use native elements where possible. If some functionality is not available natively, we’ll create a new component, but we’ll try and copy native behaviour as much as possible. We’ll also run it through multiple usability tests to be sure we’ve got this right.
- To increase the baseline usability of our app we needed a simpler interface stack. The switch to React has been a huge win for us.
- Poor usability is often a sign of an overly complicated development & design workflow. For example, it’s actually quite tricky to manage how our code listens to the “save” status of something. That’s why we’ve implemented a generic and reusable way of handling it (see “economy of effort” above).
- Finally, bear in mind that in large apps it is easy to break workflows you are not aware of. That’s another reason why you should test experience and not just the UI.
Summary
Since our launch in 2009, we’ve been fortunate enough to develop some long term customer relationships. Most of our growth has been organic, and word of mouth referrals are everything to us.
It’s because we’re in this for the long-term that we believe focusing on usability is one of the best investments we can make.
By the way, investing time in usability helps developer happiness too. That’s because we are releasing higher quality work that we’re proud of.
What about you? How do you measure and improve usability?
If you are a front-end developer and you care about usability as much as we do, we’re hiring.