In this post, standard 8 is used as an example. The changes described have been applied to all 14 standard points where relevant. We also made other minor content changes to the standards tested in this round of research that aren't highlighted in this post.

What we did

We conducted 8 user research sessions to test users' comprehension of the content of service standards 6 to 8. Additionally, we measured users' confidence in applying the standards before and after they read the content during research.

Users included product and delivery managers in DfE.

standard-8.png

What we found

In the ‘Why it’s important’ section, users felt that opening with ‘you’ll be assessed’ didn’t truly convey the importance of the standard or why they need to meet it. Instead, the focus was on getting through the assessment.

Why-important.png

What we’ve done as a result

We've moved this sentence under the ‘How to meet this standard at each phase’ section, alongside the sentence explaining that it's best practice to meet the standard even if the team isn't being assessed. As a result, all mention of service assessments is in a single place.

Why-important-after.png

What we found

The ‘think about’ box was well received by users. However, its purpose, and how it differed from the ‘things to consider’ sections, was unclear.

What we’ve done as a result

We've changed the heading from 'think about' to 'DfE assessor tips' to make clear that these are tips from assessors. We've also added an introductory line explaining that the prompts have come from DfE assessors.

thinkabout-after.png

What we found

As we don’t have live assessments at DfE, and the points raised in the beta and live sections don't differ much, users felt it was repetitive and unnecessary to split the two phases out separately.

beta-.png live-.png

What we’ve done as a result

We've combined the beta and live sections and, wherever this occurs across the standards, called out explicitly where a point is more relevant to a particular phase.

at-betalive.png

Measuring users’ confidence survey

What we found

We measured users’ confidence in applying the standards before and after they read the content during research. We found that the average confidence rating for each of standards 6 to 8 increased once users had read the content.

bar-char.png


Average confidence rating out of 5

Service standard point | Before | After
6                      | 3.5    | 4
7                      | 3.5    | 4
8                      | 3.4    | 3.7

What we’ll do next

We'll test standards 9 to 14 in the next round of research, with developers, architects, business analysts, performance analysts, and delivery and product managers.
