For links and passwords to the prototypes discussed in this post, please contact the team at SocialWork.DIGITAL@education.gov.uk
Alongside the work on user registration and account management that we covered in previous posts, we have been scoping out and testing potential designs for the assessment element of our service. This is where early career social workers (ECSWs) will submit evidence and receive feedback from assessors.
When we started designing for assessment, we intended to build a custom service from scratch. We have since decided to use Moodle, an existing open-source learning management system, to deliver the assessment element of the service.
This post covers the design and user research work we did on our previous prototype before making the switch to Moodle. The insights we gathered from designing and testing these custom journeys remain valuable and will continue to shape our thinking as we design within Moodle.
Discovery work
As the specific details of the assessments that social workers will complete have not yet been confirmed, we focused on testing broad assumptions and scoping out functions that we are confident the service will need to provide.
Our first job was to:
- understand the current journey for people submitting evidence for their assessments in the existing Assessed and Supported Year in Employment (ASYE) programme
- identify the pain points and user needs that we could meet with this service
We found that the delivery of the current ASYE programme is inconsistent and varies from employer to employer. Evidence is submitted through a mix of email, SharePoint and online portals. This creates a lot of unnecessary burden for users, who find it hard to track the latest versions of documents and make sure that everyone can access them. We also found that both ECSWs and assessors get frustrated with having to chase each other for work or feedback via email or other channels.
We identified that both ECSWs and assessors need:
- everything to be in one place and accessible by all stakeholders
- a clear way to see what assessment tasks have been done and what is outstanding
- notifications to alert them when something on the service needs their attention
With these user needs identified, we could start to scope out broad concepts for our to-be user journeys.
Concept testing
We began with four rounds of concept testing with ECSWs and assessors. We used storyboards to test and validate some basic approaches to the assessment process.
The first round tested the assumption that assessment submissions need to be text-based. We presented users with the idea of submitting their reflections or descriptions of submitted work via audio or video recording.
While some ECSWs liked the idea, assessors were concerned these would be:
- impractical to assess
- difficult to compare for moderation purposes
- inappropriate for certain types of assessed activities
These outcomes led us to base our alpha and beta phase service prototypes on text-based web forms. We wanted to focus on improving the process for submitting assessments and reducing user pain points rather than adding new functionality.
Our policy colleagues are consulting with the sector to develop the policy for the social work induction programme, including different types of assessment activity. We are working with an assessment subject matter expert to explore what assessment activities and evidence types are appropriate. This will help us determine whether we need to cater for other submission formats.
In the following rounds, we tested a proposed journey through the digital service that included:
- submitting evidence
- getting feedback
- tracking progress
The goal was to see whether the proposed journey addressed users' pain points or made them worse. The proposed journey broadly resonated with users, and we moved on to developing higher-fidelity prototypes.
Usability testing
After validating our basic concepts, we moved on to creating and iterating a higher-fidelity prototype for moderated usability testing. We conducted six rounds of user research with this prototype, looking more closely at specific functionality.
Tasks we tested with ECSWs included:
- choosing the outcomes they are targeting with a submission
- using a web form to draft and submit a piece of work
- choosing an assessor
- scheduling a review meeting
The prototype interface for selecting outcomes
The prototype web form for drafting a submission
We found that some of these tasks did not resonate with users. ECSWs did not feel that choosing an assessor was their responsibility. They also felt that scheduling and managing review meetings within the service would duplicate what they already do in their Outlook calendars. Based on this feedback, we removed choosing an assessor and adding review dates from the ECSW journey.
We also iterated the design to make it easier for ECSWs to review which outcomes they had selected for a submission and to edit their chosen outcomes before submitting.
Functionality we tested with assessors included:
- viewing an ECSW’s work in progress
- adding summative feedback for a review period
- adding comments to individual submissions
Assessors found the comment functionality useful and it fitted their current workflow, but they found the navigation for saving and submitting comments unclear. We also found that the language around first and second review meetings did not resonate with users. We changed this to reflect milestones in the programme (3-month review, 6-month review and so on) to make it easier for users to identify them.
Group exercise
Our last piece of user research on assessment was a group exercise with ECSWs, assessors and coordinators.
We designed an ‘actions’ function on the service allowing assessors to:
- add development goals for ECSWs
- set deadlines for their completion
We tested the prototype in a roleplay scenario to see how feedback and actions might be used during a review meeting. We did this to identify any barriers that might prevent users from recording the information they need on the service.
We found that ECSWs felt:
- left out of the feedback process
- that the proposed design did not address their need for ownership over their development
- that the actions did not adequately capture meaningful information about progress towards goals and outcomes
Both ECSWs and assessors need to track progress against the outcomes over time, and this function needs to be developed further so that it does not feel like a tick-box exercise. We will take these insights into account in our future design work in Moodle.
Next steps
Throughout design and testing, we have been feeding insights from our research back to our policy colleagues to help shape the direction of the assessment programme.
We now have a subject matter expert working on the specific design of the assessments, which will help to push our designs forward.
We are working in Moodle to find components and functions that meet the needs we’ve identified, and we’ve created a GitHub repository to store any custom work that we do.
Our design work in Moodle will be the focus of a future design history post.