Successful user experiences (UX) develop from a clear understanding of the target audience. Websites must account for users’ desires, abilities, and limitations to craft effective user interactions. In some cases, companies are certain they have executed this process well, only to find that customer acquisition rates are still low. How should an organization proceed from here?
Learn Why Metrics Are Important
Despite the best intentions and planning, sometimes websites are not effective. It is important to have a neutral means by which to test what is and is not working. Using measurement systems allows developers to pinpoint problem areas and adjust them.
Even successful websites benefit greatly from user metrics. Stakeholders appreciate identification and quantification of customer conversion, acquisition, and activity. Creating transparency throughout ongoing development lays the groundwork for better-informed choices. User metrics highlight a company’s unique position in the marketplace and its value to customers.
A number of measurements test UX satisfaction, the most common of which we will explore here.
Task Level Satisfaction Measurements
To gather metrics on task level satisfaction, some companies use questionnaires that measure the ease with which users complete tasks. To be effective, these surveys must be sent out immediately after a task is attempted, whether or not it is completed. The quickest inquiry that some companies employ is the Single Ease Question (SEQ). The one question is straightforward, along the lines of, “Overall, how difficult or easy was the task to complete?” answered on a seven-point scale. This quickly creates a data point with a single multiple-choice answer.
Another single-answer test is the Subjective Mental Effort Questionnaire (SMEQ). Similar to the SEQ, the SMEQ asks users one question relating to one task. In this case, however, users answer on a scale from 0 to 150, rating the mental effort required to complete the task.
A slightly larger set of inquiries is the After Scenario Questionnaire (ASQ). Consisting of three questions, these are quick and easy for users to answer. The ASQ asks for a satisfaction rating on the ease of the given task, the time it took to execute it, and the support the user feels he or she received along the way. Though not a comprehensive set of metrics on its own, the ASQ does provide useful data representing the user experience.
A more complicated tool used to test task level satisfaction is the NASA-TLX. This two-part questionnaire begins with a series of six questions relating to various demands placed on the user – mental, physical, and temporal – and asks the user to rate the experience in terms of performance, effort, and frustration. The second section of the NASA-TLX asks users to compare the six elements in pairs, indicating which of each pair they found more important in executing the specific task; these comparisons weight each element in the overall score. If a user found a task to be time consuming, for example, he or she may rate the temporal element as more important than the physical.
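The weighted scoring described above can be sketched in a few lines. In this hedged example, ratings use the common 0–100 scale, and each dimension’s weight is the number of times it was chosen in the fifteen pairwise comparisons (so the weights sum to 15); the function names and sample values are illustrative, not part of the NASA-TLX specification.

```python
# Sketch of NASA-TLX weighted-workload scoring (illustrative values).
DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_overall(ratings: dict, weights: dict) -> float:
    """Weighted average workload on a 0-100 scale."""
    # The 15 pairwise comparisons distribute exactly 15 "wins" across dimensions.
    assert sum(weights.values()) == 15, "pairwise weights must total 15"
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15

# A user who felt heavy time pressure weights "temporal" most strongly.
ratings = {"mental": 70, "physical": 10, "temporal": 80,
           "performance": 40, "effort": 60, "frustration": 50}
weights = {"mental": 4, "physical": 0, "temporal": 5,
           "performance": 2, "effort": 3, "frustration": 1}

print(tlx_overall(ratings, weights))  # → 66.0
```

Because the weights come from the user’s own pairwise rankings, two users with identical ratings can still produce different overall workload scores.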
Task level satisfaction metrics quickly point out problem points, allowing developers to make adjustments to increase UX satisfaction. Ongoing UX research is crucial to pinpointing techniques and layouts that work. Questionnaires such as these are useful in mining clear and direct data points. However, they do not create an overarching view of the UX.
Test Level Satisfaction
Test level satisfaction measurements cover a broader spectrum than task level questionnaires. Given to users at the end of a session rather than after an individual task, these surveys inquire about the overall usability of a website experience. The two most common forms of these questionnaires use a multiple-choice format, asking users to rate statements on a five-point scale from strong disagreement to strong agreement.
The shorter of the two test-level questionnaires contains eight questions. Called the SUPR-Q, this test gathers metrics on the perceived usability, design, and reputability of a site. Generally, two questions are devoted to the website’s usability and navigability, two pertain to how trustworthy users think the site is, two relate to site loyalty, and two are based solely on appearance.
The longer of the two surveys contains ten questions. Aptly named, the System Usability Scale (SUS) revolves entirely around the user’s perception of the given system. Questions range from whether the user is inclined to return to the site to whether he or she thought the system was overly complex.
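SUS responses are conventionally converted to a single 0–100 score: odd-numbered (positively worded) items contribute the response minus one, even-numbered (negatively worded) items contribute five minus the response, and the sum is scaled by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribute response - 1);
    even-numbered items are negatively worded (contribute 5 - response).
    The summed contributions (0-40) are multiplied by 2.5."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

Note that the resulting number is not a percentage: a SUS score of 68 is generally considered average, so raw scores are best interpreted against that benchmark rather than on their own.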
Much like the task-level satisfaction measurements, test level metrics highlight problem areas in a website. Many companies find them useful in improving their UX, while others benchmark their scores against competitors’. By getting a sense of where an organization ranks next to competitors, businesses gain a better understanding of what they need to improve.
Task Completion Rates
Task completion rates are an informative set of metrics that require no active input from users; system administrators can gather the relevant data directly. To calculate the overall completion rate of a site, count the number of tasks started and how many were executed fully. Divide the completed tasks by the total number of tasks started, and then multiply that answer by 100.
Example: 1,000 tasks, with 240 completed.
240 divided by 1,000 = 0.24
0.24 multiplied by 100 = 24
The total completion rate in this example is 24%, which is much lower than the 75% industry standard.
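The arithmetic above is simple enough to automate over site logs. A minimal sketch, with the function name and the sample figures taken from the worked example:

```python
def completion_rate(completed: int, started: int) -> float:
    """Percentage of started tasks that were fully completed."""
    if started == 0:
        raise ValueError("no tasks were started")
    return completed / started * 100

# The worked example: 1,000 tasks started, 240 completed.
rate = completion_rate(240, 1000)
print(f"{rate:.0f}%")  # → 24%
```

Running the same function per task type (checkout, sign-up, search) rather than site-wide makes it easy to see which flows fall furthest below the benchmark.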
Choosing individual tasks to measure is also a valuable option. Cart abandonment, for example, is a commonly gathered metric. The average rate of abandonment across industries is about 68.63%. Long checkout processes and unintuitive cart designs often lead to these abandonments. By testing exactly what percentages of users fully execute transactions, companies are able to assess whether their checkout procedures need streamlining.
A combination of overall and targeted task completion rates is ideal for identifying which elements of a website are not working. For a comprehensive, overarching view of UX satisfaction, some companies are turning to Google’s HEART framework.
Google’s HEART Framework
Google developed a framework that can be applied to all or part of a website to measure UX satisfaction. The elements of HEART are:
- H – Happiness. Usually gathered with questionnaires, the happiness factor calculates the user’s attitude, satisfaction, and perceived ease of use.
- E – Engagement. User involvement with a site is measured by certain activities such as visits per week, shares, or files uploaded.
- A – Adoption. User acquisition is a major factor in determining UX satisfaction. The adoption section looks at new user purchases, subscriptions, or upgrades.
- R – Retention. The retention part of HEART examines how many existing users are renewing their subscriptions, repeating purchases, and returning over time.
- T – Task Success. Similar to the task completion rates described above, this portion measures outcomes such as profiles completed, upload times, and search-result success.
To use the HEART system, begin by determining the overall goal for your site. With this aim in mind, decide what signals would be good indicators of success. From there, choose a set of metrics, such as those described above, that will capture those signals. By tracking the right metrics, your company will gain valuable insight into what works and what needs improvement.
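The goals → signals → metrics progression can be captured in a simple data structure before any instrumentation is built. In this sketch, the category names follow the HEART framework, but the specific goals, signals, and metrics are illustrative assumptions, not values the framework prescribes:

```python
# Hypothetical goals -> signals -> metrics plan for two HEART categories.
heart_plan = {
    "Engagement": {
        "goal": "Users find the site valuable enough to return often",
        "signals": ["repeat visits", "content shares"],
        "metrics": ["visits per user per week", "shares per 1,000 sessions"],
    },
    "Retention": {
        "goal": "Existing customers keep their subscriptions",
        "signals": ["renewals", "repeat purchases"],
        "metrics": ["90-day renewal rate", "repeat-purchase rate"],
    },
}

# Summarize what the analytics team should actually instrument.
for category, plan in heart_plan.items():
    print(f"{category}: track {', '.join(plan['metrics'])}")
```

Writing the plan down this way forces each metric to trace back to a signal and a goal, which helps teams avoid tracking numbers simply because they are easy to collect.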
To remain competitive in a sea of websites, a high UX satisfaction rating is vital. By employing the above measurement methods, companies gain valuable insight into what does and does not work. Consistently monitoring trends on your website will give you the edge that leads to higher retention and customer conversion rates.