What we did

We spoke with 6 service assessors from across DfE with a range of assessing experience, from recently trained to xgov lead assessors. There was a mix of lead, research, design and tech assessors.

We wanted to understand how assessors made sense of the assessment report journey for service assessments.

We hoped to learn how users navigated the journey, what problems they faced and how they interacted with the prototype. The journey included finding pre-assessment information about a service, then completing and submitting the report. The report design included CDDO’s new beta-tested RAG (red, amber, green) rating, which will replace the current ratings of 'met' and 'not met'.

What we found

The prototype design and navigation tested well. However, the move to a RAG status for actions, the 14 Service Standard points and the overall service outcome had mixed reviews.

Some felt it would be difficult to apply for all types of services. Others felt it looked more flexible at the outset, but that RAG rating actions, as well as each Service Standard and the service as a whole, makes the process too prescriptive.

It wasn’t clear how users differentiate between improvements for teams and actions. This risked duplicated content or incomplete sections of the report, which would cause further problems with service assessment reporting.

Users understood and navigated the dashboard landing page

"Looks like a place I can find out about the assessments I'm assessing." Participant 6
"I quite like this – it's obvious – I'm here to assess...." Participant 4

dashboard.png

Slight confusion over ‘Team’ and ‘Panel’ in side navigation

We saw some users assume they would see the team of assessors in the ‘Team’ section of the left-hand side navigation, so they selected this screen to see who the other assessors were. It only became clear that ‘Panel’ contained the assessors when they later clicked into it.

report-sidenav-team-panel.png

We included more description in side nav options

We considered how to make this clearer to users and added more description to the side navigation.

Slight confusion over the ‘Stage’ label and what it means

This was a hangover from a previous iteration, when we tested stages throughout the service lifecycle, for example pre-assessment and post-assessment. This is no longer relevant, as the service will contain discovery peer reviews and service assessments for alpha, beta and live.

So we changed the ‘Stage’ label to ‘Type’ to reflect assurance type

The service will include discovery peer reviews and service assessments. Changing ‘Stage’ to ‘Type’ and including these two options should make it clearer for assessors to confirm the assurance type.

Context about the service and assessor panel in advance is always useful

The uploaded artefacts page, assessor panel and service team panel were all easy to navigate and most users commented on the value of having this information before an assessment.

report-atrefacts.png

As heard in previous rounds of research, users appreciated having visibility of who else was involved in the assessment, from both the team and assessor view. It enables them to make contact beforehand and afterwards.

Users noted that some key roles were missing from the panel (although this is a prototype) and that some roles had crossover, for example lead and product assessor.

We’ll consider pre-populating roles for assessors specific to the phase

It’s noticeable when key roles are missing, so we can build the service so that there is always a lead, tech, research and design assessor on the panel.

report-assessor-panel.png
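
As a rough illustration of how pre-populating could work, the sketch below seeds a panel with the four core roles so a missing assessor shows up as an empty slot rather than being absent. It assumes a simple data model; the PanelSlot type and seedPanel function are hypothetical names, not part of the prototype.

```typescript
// Core assessor roles assumed for this sketch.
type CoreRole = 'lead' | 'tech' | 'research' | 'design';

interface PanelSlot {
  role: CoreRole;
  assessorName?: string; // undefined until an assessor is assigned
}

// Seed a panel so every core role appears, even before assessors are assigned.
// A missing role then shows as a visible empty slot instead of being absent.
function seedPanel(existing: PanelSlot[] = []): PanelSlot[] {
  const coreRoles: CoreRole[] = ['lead', 'tech', 'research', 'design'];
  return coreRoles.map(
    (role) => existing.find((slot) => slot.role === role) ?? { role }
  );
}

// Example: a panel with only a lead assigned still lists all four roles.
console.log(seedPanel([{ role: 'lead', assessorName: 'Lead Assessor' }]));
```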

Users skipped guidance content for writing the report

Users consistently skipped over ‘How to write the report’ content and went straight to the first page of the report – which they appeared to understand with ease.

We’ll delete the ‘How to write the report’ page and include RAG guidance at the points people need it most

report-start-page.png

Report section page tested consistently well

This page tested well with users, particularly the lead assessors. It acted as a key feature for understanding progress.

All users recognised this page as the place to write the report against each standard, and most had a good understanding of which parts they would lead on or contribute to.

A couple of users asked if it could be explicitly stated who leads on which standard, but recognised this was highlighted when clicking into the individual standards.

Users were able to navigate to the standard they needed to complete and add content to.

report-report.png

Assessor guidance expected to include specific standard guidance and RAG rating information

Users expected to read content here that reflected what they as assessors should be considering for this standard at this phase.

Additionally, because CDDO is moving from ‘met’ and ‘not met’ to a RAG rating, users expected guidance here on how they should use the RAG rating scale.

Some users struggled with the side navigation changing: the previous page had the side nav from a higher level, while this page has a side nav listing all the standard points. Because of this, some users found it hard to get back to the report section view and clicked in multiple places to do so.

report-assessor-guidance.png

We'll include RAG rating guidance and consistent service navigation in the side navigation

Content will continue to reflect what assessors should assess against for each standard, but it will also include relevant RAG rating guidance lifted from the ‘How to write the report’ page.

The side navigation will include the higher level side nav from previous screens, to help users understand where they are and how to navigate back to previous sections.

The difference between 'Improvements' and 'Actions' wasn’t clear

When an assessor added advice to improve the service, they often added the same content to the ‘Actions’ section too. This causes duplication of content and tasks. It wasn’t clear when something should be an ‘Improvement’ and when it should be an ‘Action’.

Being able to watch users navigate their way through the report showed us that this first iteration of the process can be tightened.

report-improve.png

report-action.png

We simplified the journey and removed the 'Improve' screen

We considered how ‘improve’ and ‘actions’ could be combined to cover both comments for the team to consider and actions to be captured. We removed the functionality for each ‘Action’ to be RAG rated, and kept the rating at Service Standard level.

Assessors understood RAG ratings for each Standard

In rating the entire Standard, users unanimously understood that this would replace ‘met’ and ‘not met’ from the previous way of working.

The logic that rating a standard red means the entire service is rated red was well understood. But users didn't necessarily agree that one action should determine the entire outcome.
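
To make the roll-up rule concrete, here is a minimal sketch of the logic users described, where any red standard turns the overall service outcome red. The overallOutcome function and the amber fallback rule are illustrative assumptions rather than the agreed design.

```typescript
type RagRating = 'red' | 'amber' | 'green';

// Roll up the ratings for each Service Standard into an overall outcome.
// Any red standard makes the whole service red. The amber fallback rule is
// an assumption for illustration, not an agreed design decision.
function overallOutcome(standardRatings: RagRating[]): RagRating {
  if (standardRatings.includes('red')) return 'red';
  if (standardRatings.includes('amber')) return 'amber';
  return 'green';
}

// Example: a single red rating among otherwise green standards turns the
// whole service red, which is the behaviour some assessors questioned.
console.log(overallOutcome(['green', 'green', 'red', 'amber'])); // 'red'
```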

Summary of system usability score

Overall, the usability of the service tested positively, with users giving an average score of 82.5 out of 100. This sits within the acceptable range and towards the excellent end of the scale.

Slide19.png
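
For anyone unfamiliar with how this kind of score is calculated, here is a minimal sketch assuming the standard 10-item System Usability Scale (SUS) questionnaire was used; the susScore and meanSusScore helpers are illustrative names only.

```typescript
// Score one participant's responses to the standard 10-item SUS questionnaire
// (each response is 1 to 5). Odd-numbered items are positively worded and
// contribute (response - 1); even-numbered items are negatively worded and
// contribute (5 - response). The total is multiplied by 2.5 for a 0-100 score.
function susScore(responses: number[]): number {
  if (responses.length !== 10) {
    throw new Error('SUS needs exactly 10 responses');
  }
  const total = responses.reduce(
    (sum, response, i) => sum + (i % 2 === 0 ? response - 1 : 5 - response),
    0
  );
  return total * 2.5;
}

// The reported figure is the mean of each participant's individual score.
function meanSusScore(allResponses: number[][]): number {
  const scores = allResponses.map(susScore);
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```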

Feedback from users included things like:

"It will save me time going round loads of Teams sites and through my emails trying to find stuff"
"Overall the site was intuitive. I believe it will improve the assessment experience for assessors and product teams."
"Using the GDS design system should greatly increase the accessibility of the user journey"

What’s next

We’ll run a DfE and xgov content crit for the ‘service assurance: step by step’ guidance, making sure we include xgov assessors who’ve been testing out the new RAG ratings for CDDO.

We'll also run a show and tell with CDDO on where we've got to with the prototype, and continue to work in the open to share learnings.


Tags

Content design, User research, Design Ops