Background
Across the Funding Service, operational data is spread across multiple systems, teams and formats. It is frequently copied, reworked and moved between tools to meet different needs.
This means that:
- manual extracts and reconciliation have become routine
- data quickly becomes outdated
- there is no single, trusted version of the truth
As a result, much of the service is operating on duplicated, inconsistent data, with significant effort required to maintain it.
The impact on the service
This fragmented data landscape creates challenges across the service.
Product and delivery
- Teams spend time compensating for data issues rather than improving systems
- Data and configuration changes must be rebuilt and re-checked across multiple tools
- Innovation slows as effort is focused on keeping the service running
Funding operations
- QA and assurance processes are repeated across the service
- Time is spent waiting for data to be reconciled and re‑processed before work can continue
- Capacity is absorbed by inefficiency rather than delivering value
Data standards and assurance
- Multiple versions of data reduce confidence and consistency
- Audits rely on manual checks and offline evidence
- Increased operational risk due to poor traceability
The cost of this is real - duplicated storage, repeated processing, and significant people time - but it is spread across the service and largely hidden.
Our challenge
From a service design perspective, the core challenge is:
How might we create a single, trusted source of funding data that reduces duplication, improves consistency, and supports the whole service to operate more effectively?
To answer this, we needed to understand:
- where and why data is duplicated across the service
- how data flows between systems, teams and processes
- which parts of the process introduce risk, delay or inconsistency
- what a ‘trusted’ data source would need to provide for different users
Our approach
We worked across service, product and technical perspectives to:
- map how operational data flows through the service end-to-end
- identify where duplication, transformation and manual handling occur
- quantify the impact on teams and processes
- explore what a centralised data model would need to support
This shifted the conversation from isolated data issues to a system‑level design problem. It highlighted that the core issue is not how data is used, but how it is structured and managed across the service.
Addressing the problem at the source
Current systems were not designed for the scale and complexity we now operate at.
Without change, the service increasingly relies on:
- manual workarounds
- duplicated effort
- people absorbing complexity
This has led us to define an Operational Data Layer (ODL): a single, shared source of funding data for the service.
The ODL is designed to:
- validate, update and manage data in one place
- ensure changes happen once, rather than being recreated across systems
- provide clear approval and audit trails
- allow updates to flow automatically across the service
This moves the service away from managing multiple versions of data, towards a single, controlled and reliable source. Complexity is handled at the source, rather than being absorbed downstream by people and processes. It also allows data to move flexibly between the data layer and individual systems, rather than following fixed, sequential paths through the service.
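As an illustrative sketch only (the record fields, validation rule and function names here are hypothetical, not the actual ODL design), "handling complexity at the source" means a change is validated once, applied once, and logged with who approved it, rather than being re-entered and re-checked in each downstream system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    # One entry per change: what changed, from what to what, and who approved it
    timestamp: str
    field_name: str
    old_value: object
    new_value: object
    approved_by: str

@dataclass
class FundingRecord:
    # Hypothetical funding record held once in the shared data layer
    provider_id: str
    allocation: float
    audit_trail: list = field(default_factory=list)

    def update(self, field_name: str, new_value, approved_by: str) -> None:
        """Validate and apply a change in one place, recording an audit entry."""
        # Example validation rule (hypothetical): allocations must be non-negative numbers
        if field_name == "allocation":
            if not isinstance(new_value, (int, float)) or new_value < 0:
                raise ValueError("allocation must be a non-negative number")
        old_value = getattr(self, field_name)
        setattr(self, field_name, new_value)
        self.audit_trail.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            field_name=field_name,
            old_value=old_value,
            new_value=new_value,
            approved_by=approved_by,
        ))

# Usage: the change happens once, with a clear approval and audit trail,
# and downstream systems would read from this single record
record = FundingRecord(provider_id="P001", allocation=100000.0)
record.update("allocation", 120000.0, approved_by="ops-team")
```

Because validation and audit live with the record itself, an invalid change is rejected before it can propagate, which is the behaviour the bullets above describe.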
Enabling our North Star
The Operational Data Layer provides the data foundation that enables the Funding Service North Star.
It directly supports key outcomes by enabling:
- Single source of truth: one shared place to edit, use, send and check funding data
- Improved data flow between systems: data can move independently between the Operational Data Layer and individual systems. This removes fixed dependencies and enables greater agility and efficiency over time
- Standardisation with intentional variance: funding models configured using standard patterns, with differences designed and controlled, rather than emerging through workarounds
- Data-driven front stage: accurate data flows through to statements and outputs, reducing errors, rework and provider queries
- Reduced failure demand: manual extracts, spreadsheets and duplicated QA are eliminated, lowering operational cost and risk
This is the foundation that underpins and enables wider transformation across the service.
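The "improved data flow" outcome above can be made concrete with a small back-of-envelope sketch (the numbers are illustrative, not counts of real Funding Service systems): if every system syncs data directly with every other, the number of integrations grows quadratically, whereas syncing each system only with a central layer grows linearly.

```python
def pairwise_links(n: int) -> int:
    """Integrations needed if each of n systems syncs directly with every other:
    n * (n - 1) / 2 point-to-point links."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Integrations needed if each system syncs only with a central data layer:
    one link per system."""
    return n

# Illustrative comparison for a handful of system counts
for n in (4, 8, 12):
    print(f"{n} systems: {pairwise_links(n)} pairwise links vs {hub_links(n)} hub links")
```

For 12 systems this is 66 point-to-point links versus 12, which is why removing fixed dependencies between systems reduces the rebuild-and-recheck effort described earlier.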
What this means in practice
For product and delivery
- Less time firefighting data issues
- Fewer dependencies between systems when making or releasing changes
- More time to run, improve and modernise systems
- A stable foundation to deliver change safely and at pace
For funding operations
- Reduced reliance on manual QA, reconciliation and workarounds
- Less capacity tied up absorbing inefficiency
- More predictable workloads and improved team experience
For the wider service
- More consistent outputs for providers
- Fewer errors, queries and rework
- Lower operational risk as complexity is handled by the system, not people
Next steps
To support the development of the Operational Data Layer, we are:
- developing baseline metrics to make current cost and effort visible
- prototyping the ODL to test feasibility and impact
- working with funding operations to quantify efficiency gains
This will allow us to demonstrate measurable improvements as the approach evolves.
Outcome
The Operational Data Layer establishes the foundation for a more scalable, consistent and data-driven funding service.
It enables:
- reduced operational cost and risk
- improved confidence in funding data
- faster, safer delivery of change
Most importantly, it moves complexity out of people and processes, and into the system, where it can be designed, managed and improved over time.