Usability, Customer Experience & Statistics

Measuring Usability with the System Usability Scale (SUS)

Jeff Sauro • February 2, 2011

It is the 25th anniversary of the creation of the most used questionnaire for measuring perceptions of usability.

The System Usability Scale (SUS) was released into this world by John Brooke in 1986.

It was originally created as a "quick and dirty" scale for administering after usability tests on systems like VT100 Terminal ("Green-Screen") applications.

SUS is technology independent and has since been tested on hardware, consumer software, websites, cell-phones, IVRs and even the yellow-pages.

It has become an industry standard with references in over 600 publications. 

The System Usability Scale

The SUS is a 10-item questionnaire with 5 response options.
  1. I think that I would like to use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

The SUS uses a five-point response scale for each item, ranging from Strongly Disagree (1) to Strongly Agree (5).

Scoring SUS

  • For odd-numbered items: subtract one from the user response.
  • For even-numbered items: subtract the user response from 5.
  • This scales all values from 0 to 4 (with four being the most positive response).
  • Add up the converted responses for each user and multiply that total by 2.5. This converts the range of possible values from 0 to 100 instead of from 0 to 40.
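The scoring steps above can be sketched in a few lines of Python (a minimal illustration; the function name is my own):

```python
def sus_score(responses):
    """Convert ten 1-5 SUS responses into a single 0-100 SUS score.

    responses: ten integers in questionnaire order, where
    1 = Strongly Disagree and 5 = Strongly Agree.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        if (i + 1) % 2 == 1:        # odd-numbered items: response - 1
            total += r - 1
        else:                       # even-numbered items: 5 - response
            total += 5 - r
    return total * 2.5              # rescale 0-40 to 0-100

# Each item contributes 3 of a possible 4 points here: 30 * 2.5 = 75.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Score each respondent individually and then average the per-respondent scores; don't average the raw item responses across users first.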

Interpreting SUS Scores

Despite the wide usage of SUS, there has been little guidance on interpreting SUS scores, acceptable modifications to the items and information on reliability and validity. 

Over the years I've used SUS a lot in my own research and during usability evaluations. During this time I've reviewed the existing research on SUS and analyzed data from over 5000 users across 500 different evaluations.

This data shows that SUS is a reliable and valid measure of perceived usability. It performs as well or better than commercial questionnaires and home-grown internal questionnaires. 

I've put these findings in a 150-page detailed report which contains valuable insights on background, benchmarks and best practices for anyone using the SUS. Here are a few highlights.

What is a Good SUS Score?

The average SUS score from all 500 studies is a 68. A SUS score above a 68 would be considered above average and anything below 68 is below average. 

The best way to interpret your score is to convert it to a percentile rank through a process called normalizing. I've created a calculator and guide that take raw SUS scores and generate percentile ranks and letter grades (from A+ to F) for eight different application types.

The graph below shows how the percentile ranks associate with SUS scores and letter grades. 

This process is similar to "grading on a curve" based on the distribution of all scores. For example, a raw SUS score of 74 converts to a percentile rank of 70%, meaning it has higher perceived usability than 70% of all products tested. It can be interpreted as a grade of a B-.

You'd need to score above an 80.3 to get an A (the top 10% of scores). This is also the point where users become more likely to recommend the product to a friend. Scoring at the mean of 68 gets you a C, and anything below a 51 is an F (putting you in the bottom 15%).
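The "grading on a curve" idea can be sketched as an empirical percentile rank against a set of benchmark scores. The benchmark list below is a hypothetical placeholder; real grading should rely on a large normalized dataset like the 500-study database described in this article:

```python
def percentile_rank(score, benchmark_scores):
    """Percentage of benchmark scores below `score`
    (ties count half, one common convention)."""
    below = sum(1 for s in benchmark_scores if s < score)
    ties = sum(1 for s in benchmark_scores if s == score)
    return 100.0 * (below + 0.5 * ties) / len(benchmark_scores)

# Hypothetical benchmark SUS scores, for illustration only.
benchmarks = [45, 52, 58, 62, 65, 68, 70, 72, 75, 78, 82, 88]
print(round(percentile_rank(74, benchmarks), 1))  # 66.7
```

With a tiny benchmark set like this the rank is noisy; the published calculator normalizes against hundreds of studies, which is why its percentiles differ from a naive count.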

SUS Scores are not Percentages

Even though a SUS score can range from 0 to 100, it isn't a percentage. While it is technically correct that a SUS score of 70 represents 70% of the maximum possible score, presenting it that way invites the misreading that the score sits at the 70th percentile, and that the application tested is therefore above average. In fact, a score of 70 is close to the average SUS score of 68, so it is more accurately described as being near the 50th percentile.

When communicating SUS scores to stakeholders, especially those unfamiliar with SUS, it's best to convert the raw SUS score into a percentile rank first, so that a reported 70% really does mean above average.

SUS Measures Usability & Learnability

While SUS was only intended to measure perceived ease-of-use (a single dimension), recent research[pdf] shows that it provides a global measure of system satisfaction and sub-scales of usability and learnability.  Items 4 and 10 provide the learnability dimension and the other 8 items provide the usability dimension. This means you can track and report on both subscales and the global SUS score.
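Assuming the two-factor split described above, the subscales can be computed as follows. The 12.5 and 3.125 multipliers are one published rescaling convention that puts both subscales on a 0-100 range (2 items × 4 points × 12.5 = 100; 8 items × 4 points × 3.125 = 100); treat them as an assumption of this sketch:

```python
def sus_subscales(responses):
    """Split ten 1-5 SUS responses into usability (8 items) and
    learnability (items 4 and 10) subscales, each rescaled to 0-100,
    plus the overall 0-100 SUS score."""
    # Same per-item adjustment as the overall SUS score:
    adjusted = [(r - 1) if (i + 1) % 2 == 1 else (5 - r)
                for i, r in enumerate(responses)]
    learn = adjusted[3] + adjusted[9]          # items 4 and 10, max 8
    use = sum(adjusted) - learn                # remaining 8 items, max 32
    return use * 3.125, learn * 12.5, sum(adjusted) * 2.5

usability, learnability, overall = sus_subscales([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])
print(usability, learnability, overall)  # 75.0 75.0 75.0
```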

SUS is Reliable

Reliability refers to how consistently users respond to the items (the repeatability of the responses).  SUS has been shown to be more reliable and detect differences at smaller sample sizes than home-grown questionnaires and other commercially available ones.

Sample size and reliability are unrelated, so SUS can be used on very small sample sizes (as few as two users) and still generate reliable results. However, small sample sizes generate imprecise estimates of the unknown user-population SUS score. You should compute a confidence interval around your sample SUS score to understand the variability in your estimate.
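A t-based confidence interval around a small-sample mean can be sketched as follows; the five sample scores and the critical t value (2.776 for a 95% interval with 4 degrees of freedom) are illustrative:

```python
import math
from statistics import mean, stdev

def sus_confidence_interval(scores, t_crit):
    """Two-sided confidence interval around the mean of a sample of
    SUS scores. t_crit is the critical t value for the chosen
    confidence level and n - 1 degrees of freedom."""
    m = mean(scores)
    margin = t_crit * stdev(scores) / math.sqrt(len(scores))
    return m - margin, m + margin

# Hypothetical sample of five users; 2.776 is the 95% critical t for 4 df.
low, high = sus_confidence_interval([62.5, 70.0, 75.0, 80.0, 67.5], 2.776)
print(round(low, 1), round(high, 1))  # 62.6 79.4
```

Note how wide the interval is at n = 5: the sample mean of 71 is compatible with anything from a below-average to a well-above-average product, which is exactly why the interval should accompany the score.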

SUS is Valid

Validity refers to how well something can measure what it is intended to measure. In this case that's perceived usability.  SUS has been shown to effectively distinguish between unusable and usable systems as well as or better than proprietary questionnaires.  SUS also correlates highly with other questionnaire-based measurements of usability (called concurrent validity).

SUS is not Diagnostic

SUS was not intended to diagnose usability problems. In its original use, SUS was administered after a usability test where all user-sessions were recorded on videotape (VHS and Betamax). Low SUS scores indicated to the researchers that they needed to review the tape and identify problems encountered with the interface. SUS can be used outside of a usability test for benchmarking, however, the results won't shed much light on why users are responding the way they are.

Modest Correlation between SUS and Task-Performance

Users may encounter problems (even severe problems) with an application and still provide SUS scores that seem high. Post-test SUS scores do correlate with task performance, although the correlation is modest (around r = .24 for completion rates and time), which means that only around 6% of the variability in SUS scores is explained by what happens in the usability test. This is the same level of correlation found[pdf] with other post-test questionnaires.
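The jump from r = .24 to "around 6% explained" is just the square of the correlation coefficient:

```python
# Squaring a correlation coefficient gives the coefficient of
# determination: the proportion of variance in one measure that is
# explained by the other.
r = 0.24                      # modest SUS / task-performance correlation
variance_explained = r ** 2   # 0.0576, i.e. roughly 6%
print(f"{variance_explained:.1%}")  # 5.8%
```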

Quick and Not So Dirty

At only 10 items, SUS may be quick to administer and score, but data from over 5000 users and almost 500 different studies suggests that SUS is far from dirty. Its versatility, brevity and wide-usage means that despite inevitable changes in technology, we can probably count on SUS being around for at least another 25 years. 

To help you in your next study with SUS or to interpret your existing SUS data I've assembled a comprehensive guide on how to use benchmarks, compare SUS scores and find the right sample size for your study.

About Jeff Sauro

Jeff Sauro is the founding principal of MeasuringU, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 5 books on statistics and the user experience.
More about Jeff...

Posted Comments

There are 26 Comments

January 23, 2016 | Edwin wrote:

I am finalizing my dissertation proposal and would like to use the SUS as one of the instruments in my study comparing the usability of a traditional graphing calculator with a graphing mobile application. As I work on the instrumentation section of my paper and provide research basis for the committee and research consultant, could you provide me with the name of any studies, besides Tsai, Kuo, Chu, and Yen (2015), which utilized the SUS with the two factors (usability and learnability)? Thank you. 

December 13, 2015 | Tay wrote:

Can the SUS method of scoring and the meanings of the scores be measured the same even if I change most of the questions and the number of questions? Your help would be very appreciated.

August 3, 2015 | Brittney wrote:

I'm not entirely clear on how we're supposed to be measuring usability and learnability. I get that questions 4 and 10 are representative of those, but I cannot find anywhere how to come up with those scores.

July 30, 2014 | rasheed wrote:

thnx good 

June 17, 2014 | Ram Ghimire wrote:

Hi Jeff,
Thank you for the article on SUS.
I wonder what would be the implication if we start from strongly agree and end with strongly disagree.

May 1, 2014 | Michael wrote:

Am I the only one who subtracted Even numbers from the question 5 not the number 5 ?  

April 16, 2014 | Richard wrote:

Do you think the SUS could be used as a valid measure of document usability. By document I mean a physical or digital instruction book/manual. In this case the instructions relate to evacuation plans for a health facility. The 10 terms of the scale seem to apply well in the context of the documents and the circumstances they may be used in. Do you believe the results would be as valid? 

April 15, 2014 | Pam Murphy wrote:

Hi Jeff,

Thanks, that's the most comprehensive article I've found on SUS in trying to establish if it's a quantifiable metric and one that can provide learned outcomes. However, I find this observation a little confusing:

"...In fact, a score of 70 is closer to the average SUS score of 68. It is actually more appropriate to call it 50%. When communicating SUS scores to stakeholders, and especially those who are unfamiliar with SUS, it's best to convert the original SUS score into a percentile so a 70% really means above average."

To me, it would be more transparent to use the same interpretation regardless of who you are communicating with, so we can have a common understanding of the results.

February 13, 2014 | eric wrote:

Hi Jeff, how would one mitigate against users answering the first question negatively due to the user not having many opportunities to use the product?

For example, a manager might have a need to use a tool only once a month, and on those occasions would prefer to use this tool over any others. 

June 22, 2013 | Diego Dabrio-Polo wrote:

Great article, indeed!
Thanks to it I decided to use SUS to test for usability an application that I had developed for my Master's degree final project. 

April 10, 2013 | Nacho Pastor wrote:

We are working on translating SUS to Spanish, but do you know if there are some issues to take into account before translating? I mean things like copy, verbs, adjectives, etc.

I would like to know if there is something special with the copy.

Thanks from Barcelona.

April 5, 2013 | Melissa Sombroek wrote:

I saw that there is a Dutch version of the SUS. Do you know if there is information about the validity of the Dutch version or who translated it?

You wrote that there was a Dutch study that used the SUS. Can you tell me which study that was?

April 2, 2013 | Timo wrote:

Sorry Jeff about the earlier question: I asked it too quickly. Now I find that SUPR-Q is the option for evaluating web services. However, I have the same setting as Erin: I need to evaluate a (government) web service with no purchasing or bill-pay functions (and thereby the question "I feel comfortable purchasing from this website" does not seem appropriate).

April 2, 2013 | Timo Jokela wrote:

Is SUS applicable for evaluating usability of systems other than work systems? Such as web services for citizens?

I find some questions of SUS "work oriented", such as "I think that I would need the support of a technical person to be able to use this system".

October 24, 2012 | Scott wrote:

I'm not sure I understand this comment: "only around 6% of the SUS scores are explained by what happens in the usability test." What other input for evaluation is the participant using if the participant's only exposure to the system or website is the exposure during the test (i.e. no pre-existing knowledge or experience)?

Just curious as I try to understand some recent results.

Thank you.

October 21, 2012 | Will wrote:


Thanks for the great resource. I have been using SUS for testing usability of mobile websites, coupled with some other usability experiments. The SUS provides a great benchmark for identifying opportunities, but I wonder if there is any precedent on how the 68 average translates to mobile usability? Do you think it still serves as a good average comparison?

March 7, 2012 | Jeff Sauro wrote:


Good question. See the article How much does the usability test affect perceptions of usability? where I explored how much the usability study impacts SUS scores.

I also agree with your take. A stand alone SUS score is probably closer to a user's attitude about their experience with a product, rather than specific tasks or experiences from the usability test--which may not represent their actual use (and their likelihood to recommend or repurchase the product). 

March 6, 2012 | Dimiter Simov wrote:

Jeff, I wonder how reliable SUS is when delivered stand-alone.

The general practice is to deliver the survey after usability testing. However, one can just send it over to a bunch of users and ask them to fill it out about a product or service.

1. Do you know whether scores differ when the survey is delivered in this manner as compared to scores received after usability testing? I would expect some differences.

2. Can we assume that the score from a stand-alone delivery is more holistic: users are evaluating their complete experience with the product, not just what they covered during the test? I would say, yes.

January 31, 2012 | Jeff Sauro wrote:

I happen to have a Dutch version of the SUS that we used in a test a few years ago. It seemed to work well.

Item 5 basically means that the functions in the software aren't split up or segmented in a way that requires the users to constantly change modes, navigate trees or stop what they're doing to do an associated task.

You'd likely benefit from the SUS guide and calculator package.


January 31, 2012 | Beant wrote:

Hi!
Thanks for the article. I am planning on using SUS but am not sure about the meaning of Item 5 - "I found the various functions in this system were well integrated." I would like to provide a hint to the users about what this means. Could you perhaps help?

November 28, 2011 | Keith Posner wrote:

This is an excellent article and answers all my questions about how to interpret the SUS score. 

October 26, 2011 | Elise wrote:

Loved this article, thanks! I am pushing to use the SUS for an upcoming study. The percentile vs. SUS score chart was especially helpful to communicate the SUS score meaning, THANK YOU! :) 

March 23, 2011 | Joshua Brooks wrote:

You've done some outstanding work with the SUS. After viewing Brooke (1996) and deciding to go with the SUS for my Master's project in Human Computer Interaction at Iowa State (developed a small homeschool planning interface and needed to measure usability), I was glad to find this article, and your work with James Lewis, as well as the Bangor, et al, (2008) and Tullis & Stetson (2004) articles that really confirm that this brief 10 question survey is well worth it.  
