Sunday 1 August 2021

User Testing

User testing, also known as "usability testing", is one of the main methods used to evaluate a design. At its core it's basically giving a user a task to accomplish within the system and observing the user try to accomplish that goal. By observing users work with a system you learn

  • What works and what doesn't 
  • Why some things work and why others don't
  • User needs you missed or misunderstood

The basic flow for running a "user test" is:

  1. Find potential users
    • When picking users, make sure to select ones from your target audience
    • Pick participants who are not already users of the system
  2. Give them tasks to complete within the system
    • Selecting tasks for users to try is much more difficult than it seems
    • Start with the most common tasks
    • Then move on to less frequent ones, working in descending order of importance
    • Closed-ended tasks:
      • have a clear and defined point of completion
      • have a verifiable outcome
      • follow a predictable path
    • Open-ended tasks:
      • are more natural
      • are difficult to assess for success because of their ambiguity
      • explore paths that may not have been identified
    • Use both open- and closed-ended tasks (see the sketch after this list).
  3. Observe them complete their tasks
  4. Debrief them after they've successfully or unsuccessfully completed their tasks
  5. Document what you've learned
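
As a rough illustration of closed- versus open-ended tasks, here is how a task set might be captured as data. This is a minimal sketch for a hypothetical online shoe shop; every name in it is made up:

    from dataclasses import dataclass

    @dataclass
    class Task:
        """One task for the participant to attempt."""
        prompt: str                  # what we read to the participant
        kind: str                    # "closed" or "open"
        success_criterion: str = ""  # verifiable outcome; closed tasks need one

    # Hypothetical task set for an online shoe shop, ordered from
    # most common task to least common.
    TASKS = [
        Task(prompt="Buy a pair of running shoes.",
             kind="closed",
             success_criterion="Order confirmation page is shown."),
        Task(prompt="Find out what the returns policy is.",
             kind="closed",
             success_criterion="Participant states the returns window."),
        Task(prompt="Have a look around and tell us what you'd use this site for.",
             kind="open"),  # open-ended: no single verifiable outcome
    ]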

We do this as part of our assessment iteration so that we can redesign our system to work better.

When selecting our task sets, some things to keep in mind are:

  • order from easiest to hardest
  • focus on critical tasks: the things the system must accomplish
  • include both "open" and "closed" ended tasks
  • avoid ordering effects: giving the user the answer to a subsequent task in the current one
  • don't lead the user: avoid language that gives away how to accomplish the task
  • avoid ambiguous instructions: when defining your tasks, be specific enough that the user clearly understands what you want them to do, but without leading them. For example, "Buy a pair of shoes" has a verifiable outcome, while "Click the cart icon and check out" gives the path away.
  • tell the user to indicate when they feel they've completed the task
  • pilot the tasks
    • check the tasks yourself and have some colleagues try them out to ensure they meet the above criteria (a rough, partly automatable version of this check is sketched below)
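
Piloting can be partly mechanized. Building on the Task sketch above (again, purely hypothetical), a quick check might flag closed tasks with no verifiable outcome and prompts that lead the user:

    # Hypothetical pilot check over the TASKS sketch above.
    LEADING_WORDS = {"click", "button", "menu", "tab"}  # words that give the path away

    def pilot_check(tasks):
        """Return warnings about tasks that break the guidelines above."""
        warnings = []
        for number, task in enumerate(tasks, start=1):
            if task.kind == "closed" and not task.success_criterion:
                warnings.append(f"Task {number}: closed task has no verifiable outcome")
            prompt_words = set(task.prompt.lower().split())
            if LEADING_WORDS & prompt_words:
                warnings.append(f"Task {number}: prompt may lead the user: {task.prompt!r}")
        return warnings

    for warning in pilot_check(TASKS):
        print(warning)

None of this replaces having real colleagues try the tasks; it only catches the mechanical slips.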

Think out loud

Participants verbalize out loud what they are thinking as they're accomplishing their tasks:

  • looking for something  
  • reading text
  • hypothesizing how the system might work
  • interpreting system options
  • interpreting system feedback
  • explaining their decisions 
  • feelings: frustrated, happy, etc

It's not normal for users to "think out loud" like this, so don't hesitate to remind them that you're interested in how they feel and what they're thinking. Use positive reinforcement to coax their thoughts and opinions out.

Advantages of this approach are:

  • hear how the user thinks about the task
  • learn what the user actually sees and notices
  • hear how the user interprets options and feedback

Disadvantages of this approach are:

  • timing: since users are vocalizing what they're doing, they won't zip through the system as quickly as they might otherwise
  • attention to detail: since users are vocalizing and paying more attention to what they're doing, they may notice things that would otherwise be overlooked, so their behavior may not reflect natural use
  • users will naturally ask questions, but as the observer you are not supposed to answer them

Post user test (debriefing)

Once the user test is complete you can:

  • review the user's problems to try to get more information out of them
  • ask the user if they find the product useful, and if it's something they see themselves using
  • ask if it was usable, credible, and aesthetically pleasing
  • compare it to existing alternatives

What have you learned?

After you've run your tests and completed your debriefing, it's time to summarize your findings. What you should focus on are the critical incidents:

  • errors: where users didn't follow the expected path or didn't do what was expected
  • expressions of frustration: users got stuck or seemed confused about how to proceed
  • breakdowns: where simple tasks took a long time, or users detoured from the defined journey but still got where they were supposed to
  • pleasant surprises: things the user enjoyed, things that were easier than expected

Assess whether the user failed or succeeded, and to what degree. Capture the user's demeanor: were they happy with the system, or do they think it's a load of bollocks? Most importantly, capture as much objective and subjective data as possible, as soon as possible: ideally during the test, at the debrief, and directly after. Write down all the critical incidents you can:

  • Mental model mismatches
  • Misinterpretations
  • Invalid assumptions made by the system
  • Missing user needs
  • Too little flexibility 
  • Too little guidance 

While summarizing your results you really want to capture overall reactions to specific aspects of the system and link those with the user's successes and failures; one way to keep those notes consistent is sketched below.
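
A consistent shape for those notes makes the summary easier to write. Here is a minimal sketch of an incident log; the categories mirror the lists above and everything else is invented:

    from dataclasses import dataclass
    from enum import Enum

    class IncidentKind(Enum):
        ERROR = "error"              # wrong path, unexpected action
        FRUSTRATION = "frustration"  # stuck or confused
        BREAKDOWN = "breakdown"      # slow or detoured, but got there
        SURPRISE = "surprise"        # pleasant surprise

    @dataclass
    class CriticalIncident:
        task: str           # which task it happened on
        kind: IncidentKind
        note: str           # what you observed, as objectively as possible

    # Example entries captured during a session (made up):
    incidents = [
        CriticalIncident("Buy running shoes", IncidentKind.FRUSTRATION,
                         "Looked for sizes in a filter sidebar that doesn't exist; "
                         "mental model mismatch."),
        CriticalIncident("Find returns policy", IncidentKind.SURPRISE,
                         "Found it from the footer in seconds."),
    ]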