Thursday 5 August 2021

Micro Usability Testing

A usability test is a formalized test in which participants are drawn from the target audience. To take a pragmatic approach early on, we can leverage a "micro usability test", a scaled-back version of a full usability test:
  • Users can be "close enough" to the target audience (whoever is willing and able)
  • Fewer tasks: 2 to 3 tasks taking roughly 15 to 20 minutes, instead of 5 to 10 tasks taking 1 to 1.5 hours
  • No screen recording of user actions
  • No video recording of user
  • No logging of user actions
  • No questionnaires about the user's persona
The goal is simply to capture the 2 to 3 biggest takeaways from the micro usability test; there's no need to correlate the data or do an in-depth analysis of the whats, whys, and hows.

When conducting a micro usability test you want to come up with 2 to 3 tasks:
  • Each task should be presented to the participant separately from the other tasks
  • Order the tasks from easiest to hardest
  • The tasks should be clear and concise
  • Tasks should have a clear, well-defined solution
  • While completing the tasks, the user should use the "speak out loud" technique
  • When they think they're done, the user should notify the tester
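As a hypothetical sketch (the Task structure and its field names are my own, not part of any standard usability tooling), the task rules above could be captured in a small script that presents one task at a time and records what happens:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One micro-usability-test task, shown to the participant on its own."""
    prompt: str               # clear, concise description of what to do
    expected_outcome: str     # the clear, well-defined solution
    notes: list = field(default_factory=list)  # "speak out loud" observations
    completed: bool = False   # set when the user says they're done

# Ordered easiest to hardest, 2 to 3 tasks total (example tasks are invented)
tasks = [
    Task("Find the search page", "Search page is open"),
    Task("Search for 'usability' and open the first result",
         "First result is open"),
]

for task in tasks:
    # Each task is presented separately from the others
    print(task.prompt)
```

This is just a note-taking aid for the tester; the participant only ever sees one prompt at a time.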
Once all the tasks are complete, or the user gives up, it's time for the debrief. This is the tester's opportunity to engage with the participant and ask things like: "At this point you seemed surprised / frustrated / confused; could you tell me why? What was wrong with the system?" It's your opportunity to really investigate and find out what the user was thinking or feeling.

After your ad-hoc investigation you should ask some predefined questions for general feedback:
  • Have you ever used a product like this? Why or why not?
  • Do you see yourself using something like this? Why or why not?
  • Some questions particular to what you're testing
Once the test is run and the test and post-test data are collected, it's time to compile it all into a micro usability test report. The report should consist of 3 sections:

1) Key observations
A few paragraphs about key observations throughout the test
  • Describe the participants; write a persona (who they are, what kind of experience they have with similar systems and with technology overall)
  • How the test went overall
  • Success rate of tasks
  • Partial or complete failures of tasks
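To make the success-rate and failure bullets concrete, here is a minimal sketch (the outcome labels and values are illustrative, not from a real session) of tallying per-task results:

```python
# Hypothetical per-task outcomes from one micro usability session:
# "success", "partial", or "failure" for each of the 2 to 3 tasks.
outcomes = ["success", "partial", "success"]

successes = outcomes.count("success")
partials = outcomes.count("partial")
failures = outcomes.count("failure")

# Success rate over all tasks attempted
success_rate = successes / len(outcomes)
print(f"Success rate: {success_rate:.0%} "
      f"({successes} success, {partials} partial, {failures} failed)")
```

With only 2 to 3 tasks and a handful of participants, the raw counts are usually more honest to report than the percentage alone.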
2) Problems
Focus on the top 3 to 5 biggest problems observed and diagnose their causes. Things to focus on:

  • What worked well? What didn’t? 
  • What were the most confusing or frustrating aspects of the interface? 
  • What errors or misunderstandings occurred? 
  • What did users think about the interface? 
  • What would they like to see improved?
3) Recommendations 
List the main issues that were brought to light, back them up with evidence from the test, and propose recommendations for how to rectify the problems.
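As a sketch of how the three sections above fit together, the report could start from a plain-text skeleton like this (every field value here is a placeholder, not real test data):

```python
# Minimal skeleton for a micro usability test report, following the
# three sections described above. All filled-in values are placeholders.
report = """\
Micro Usability Test Report

1) Key observations
Participant persona: {persona}
Overall: {overall}
Task success rate: {success_rate}

2) Problems
{problems}

3) Recommendations
{recommendations}
"""

print(report.format(
    persona="willing volunteer, daily smartphone user",
    overall="completed 2 of 3 tasks",
    success_rate="67% (2 success, 1 partial)",
    problems="- Search button was overlooked (observed during task 2)",
    recommendations="- Make the search button more prominent",
))
```

Keeping the skeleton fixed makes it easy to compare reports across successive micro tests.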