We’d seen through research that service and delivery teams aren’t always aware of the different types of checks and assurance that can support their work in DfE.

There are multiple places for teams to find out about assurance, and knowing what to do, and when, relies on a degree of experience.

As part of our work on guidance for assessments and assurance, we wanted to design guidance that helps teams at any stage of the delivery or product lifecycle understand the types of assurance relevant to their digital project.

First iteration of the assurance guidance

We knew that awareness of assurance in the discovery phase was low. So we designed a tool that asks a series of questions to determine what assurance is relevant in discovery and how to book it.

In alpha, our scope was assurance in the discovery phase – so this framed the questions and outcome in the tool. We knew through research that teams did not often know about discovery peer reviews as they are not a CDDO requirement. So the assurance tool was a way to validate this option.

This met the following user needs:

As a service team,

we need to know whether we need a discovery peer review,

so that we can assure our work

And,

As a service team,

we need to know what different assessments there are,

so that we can choose which assessment is the most suitable for our team

Second iteration of the assurance guidance

As we moved from alpha to beta, we built out the tool based on what we knew from discovery peer reviews, to include other types of assurance.

We included guidance on when teams need an assessment or peer review, and assurance guidance to help them meet the Service Standard.

Image showing start page for Check what assurance you need tool with a start button

Created content snippets to provide specific guidance unique to the user

Using a decision tree, we created reusable content snippets that would be shown to the user based on their project, product or service.
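
As an illustration only, and not the actual prototype code, a decision tree like this can be modelled as questions whose answers lead either to a follow-up question or to one or more reusable content snippets. The snippet names, question wording and routing below are hypothetical.

// Hypothetical sketch of the decision-tree approach, not the real prototype:
// each answer leads either to a follow-up question or to reusable content snippets.

type SnippetId = "discovery-peer-review" | "service-assessment";

interface QuestionNode {
  id: string;
  text: string;
  answers: {
    label: string;              // for example "Yes", "No" or "I don't know"
    next?: QuestionNode;        // follow-up question, if any
    snippets?: SnippetId[];     // reusable content shown on the outcome page
  }[];
}

// Snippets are keyed by id so the same guidance can be reused
// across different routes through the tree.
const snippets: Record<SnippetId, string> = {
  "discovery-peer-review": "Book a discovery peer review ... (placeholder text)",
  "service-assessment": "Your service needs a DfE assessment ... (placeholder text)",
};

const tree: QuestionNode = {
  id: "phase",
  text: "Where are you in your phase?",
  answers: [
    { label: "Start", snippets: ["discovery-peer-review"] },
    { label: "End", snippets: ["service-assessment"] },
    { label: "I don't know", snippets: ["discovery-peer-review"] },
  ],
};

// Walk the tree with the user's answers and collect the guidance to show.
function outcome(node: QuestionNode, chosen: string[]): string[] {
  const [first, ...rest] = chosen;
  const answer = node.answers.find(a => a.label === first);
  if (!answer) return [];
  const here = (answer.snippets ?? []).map(id => snippets[id]);
  return answer.next ? [...here, ...outcome(answer.next, rest)] : here;
}

In a setup like this, the outcome page would render whichever snippets were collected along the user's route, which is what lets the same guidance appear for different combinations of answers.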

We tested readability in Hemingway and shared the decision tree and content snippets in the prototype with the design community and DigiComms for feedback.

Ran a design crit with 5 users

We tested the flow and content with a mixture of policy, delivery, product and business partner roles in DfE.

We asked a series of questions based on participants' current phase and experience of work.

And we learned a lot. Things had changed since our original scope of discovery peer reviews.

Users found questions confusing, subjective or up for debate within their own teams

Screenshots of questions from the tool: where the team is in their phase (start, mid, end or don't know), whether the product or service is going to be transactional (yes or no), and whether they are using a form builder to build the product (yes or no)

Users requested ‘I don’t know’ radio options for the majority of questions. They also suggested that, depending on who was using the tool, people in the same team could give different answers, which would provide a different outcome.

Users challenged content that exists elsewhere

We asked whether services have any defined measures for success. If users replied no, we provided guidance on what these should be and directed them to GOV.UK.

Question asking 'Have you defined measures of success for your product or service?' with yes or no answer options

Users questioned whether an assurance tool is the best place to refer to measures for success, noting that guidance on measures of success is already captured and supported by other means in the department, for example within teams, by heads of profession, or through content on GOV.UK.

The content was within the tool because it helps teams to apply and meet the Service Standard. But responses from users in the crit caused us to challenge this and question: is this scope creep? Are we meeting the original user needs for assessments and assurance?

Outcome guidance on what type of assurance teams need was helpful – but content could work harder

Users liked the outcome page showing what type of assurance they needed – but they also wanted to know why that type of assurance. This would allow them to go back to their team or senior responsible officers (SROs) and say, ‘this is what we need and why’.

Users felt that if they didn’t like the result they were given, some teams might go back and answer the questions again, slightly differently.

Although 1 user could see the benefit of being able to download the results as a Word document, other participants said they would not use this feature.

The ‘start again’ option was seen as unnecessary. The list of entered results, ‘Your responses’ – in draft form in the prototype – was seen as a helpful record in case things changed for the team. They could return to see how they got the results they did.

Results page telling the user if they need a DfE assessment or a peer review

We revisited user needs and redefined scope

Following the crit and analysing feedback as a team, we revisited the original user needs and challenged whether the tool is still meeting those needs.

This, combined with feedback that answering the questions could result in many ‘I do not know’ answers, made us revisit the scope of the tool.

User needs for peer review and assessment awareness

As a service team,

we need to know whether we need a discovery peer review,

so that we can assure our work

And,

As a service team,

we need to know what different assessments there are,

so that we can choose which assessment is the most suitable for our team

Next step: redesign the guidance based on what we learned

Feedback from the design crit, reviewed against the original user needs, validated that the scope of the tool had changed and that it no longer met those needs.

Based on what we now know, we need to change the design.

So, we took it back to basics: flat guidance content with definitions of what each type of assessment and peer review is, why you would have each one, and when.

We also reviewed users' experience of, and feelings about, the word 'assurance' – because although that is what assessments and peer reviews are, the language does not sit comfortably with people.

We'll bring multiple sources of information together upfront, in one place.

Guidance will give the user clear criteria for service assessments and peer reviews. The peer review criteria will specifically reference the discovery phase. Upfront guidance will allow users to check what assurance they need and choose what’s most suitable for their phase.
