Sprint planning
Ahead of the design sprint, our interaction designer collaborated with the service designer and business analyst to prepare problem-framing workshops, ideation exercises and decision-making tools to help us work through the problem, gain clarity and find a path to a working solution.
Design sprint day 1 – Understand
We began with problem framing and a playback from our user researcher on insights and user needs for progress tracking. Next, we collaborated on goal setting, converging our individual ideas before using dot voting to identify the primary user goal.
We then explored the progress-tracking problems users face and created a series of actionable How Might We (HMW) statements, ideated ways to solve these problems, and prioritised which HMW to focus on first.
To conclude the day, each participant shared a ‘front-of-mind’ solution – the first thing that came to mind when thinking about the service – presenting their idea to the group through visuals and text.
Design sprint day 2 – Ideation
Building on the front-of-mind solutions from day 1 of the design sprint, we started the day with free-flowing ideation, creating a mood board of visuals to stimulate creative thinking. This helped the team reach a consensus on the style of the visuals. Next, we dived into a Crazy 8s exercise, a rapid ideation technique that involves sketching eight ideas in eight minutes. It encourages creativity and problem-solving and helps the team push boundaries.
Each of us then shared our top two ideas with the group. In the afternoon, we sketched our chosen idea, focusing on how it would support the user in achieving their goal. The day concluded with us all presenting our sketches.
Design sprint day 3 – Decide
Working in groups, we refined the ideas presented the day before, narrowing them down to a solution we could turn into a testable hypothesis ready for prototyping. This was followed by risk and assumption mapping, a collaborative technique that helps the team identify and assess any assumptions that might carry risk, and address them before moving forward.
Prototype and testing
After the short design sprint, designers built a high-fidelity Figma prototype of the progress-tracking concept. We made quick decisions about exactly what the concept was and what it included, as well as the key questions we wanted to ask users in testing. The prototype wasn’t meant to be the final design, only realistic enough for users to interact with, so we could guide them and support them in communicating what we needed to learn.
A round of user research and usability testing followed. Participants were sent a link to the Figma prototype and asked to share their screen so we could observe their behaviour, and we discussed the service in relation to their needs as they navigated the prototype. Findings from the usability testing were carefully recorded for analysis; our user researcher produced a slide deck of the findings and presented it back to the team.
Key findings from usability testing
Showing users an overall ‘score’
We wanted to see how users would feel about being shown an overall ‘score’ for their use of technology, based on the answers they gave us. We weren’t sure about the wording of this, so we added an early version of the idea to the first part of our service.
In research, some users missed this progress overview and benchmark rating/score. Those who did scroll down to it found that it clearly explained what they had done and what was still outstanding. Some users particularly liked the red, amber and green colour coding and said they would use it to see what to do next. Some expected to be able to click into it for more information.
Although we knew we didn’t yet have the right language for this score, it was reassuring to see users confirm that assumption. We can now explore different terms to help users understand their overall progress.
Understanding actions
In the research sessions, the actions within the recommendations the service gave users weren’t immediately clear. Users understood what the service was aiming to do, but the language wasn’t clear enough.
Some users are recommended by the service to continue doing something, which they didn’t identify as a clear action. Where there was active language and clear verbs, users felt much more comfortable knowing what they needed to do.
Tracking their progress
We knew that users wanted clearer progress tracking of actions to help them see what they’ve already done and what’s left to do.
When using the new prototype, users liked that the actions formed a list they could tick off. They wanted a detailed checklist which showed what they needed to do to be able to mark something as completed, or helped them see how far along they were with each action.
Some users wanted more specific guidance on how to track their progress. They were concerned about marking things as completed too early, and what that would mean for how DfE viewed their status with digital technology.
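As a rough illustration of the kind of structure this feedback points towards (a hypothetical sketch, not the team’s actual design or data model), a checklist like the one users described could give each action an explicit status rather than a simple done flag, so partial progress can be recorded without marking something as completed too early. All names below are made up for illustration.

```typescript
// Hypothetical sketch of a progress-tracking checklist; all names are
// illustrative and not taken from the real service.

type ActionStatus = 'not started' | 'in progress' | 'completed';

interface ChecklistAction {
  id: string;
  title: string;     // the recommended action, phrased with a clear verb
  steps: string[];   // what needs doing before it can be marked completed
  status: ActionStatus;
}

// Overall progress for a summary view: the share of completed actions.
function overallProgress(actions: ChecklistAction[]): number {
  if (actions.length === 0) return 0;
  const completed = actions.filter(a => a.status === 'completed').length;
  return completed / actions.length;
}

// Example: one action in progress, one completed.
const actions: ChecklistAction[] = [
  {
    id: 'a1',
    title: 'Review your broadband contract',
    steps: ['Find the contract', 'Check the renewal date'],
    status: 'in progress',
  },
  {
    id: 'a2',
    title: 'Create a hardware inventory',
    steps: ['List your devices'],
    status: 'completed',
  },
];
console.log(overallProgress(actions)); // 0.5
```

An explicit ‘in progress’ state would also address the concern above: users could record that they have started an action without committing to it being complete.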
Next steps
Testing our prototype with users prompted useful conversations that confirmed some of our assumptions and challenged others.
We know that helping our users track their progress will add to the value the service gives them. Next, we’ll take the findings from our research and iterate our prototype in response to user feedback.