
Building a testing service that works for everyone

Kate Every, Service Designer for the NHS Test and Trace programme, looks at how using data and research helped the team meet the challenge of ensuring anyone within the UK population could carry out a lateral flow test for COVID-19.

COVID-19 has impacted every one of us. Our challenge was to build and iterate a nationwide testing service that had to work for people from all walks of life, who were testing for very different reasons.

Collage image showing a hand squeezing a COVID-19 sample from a vial onto a lateral flow device with QR codes overlaid on top.

Someone may test because they wish to visit their elderly mother in a care home. Or perhaps they want to know if they’re infected before going back into the classroom. Or maybe they need to test to go to work today so they don’t lose income for their family. The diversity of our user base is something our team is acutely aware of.

So the environment consisted of:

  • a hugely diverse user base, with distinct needs
  • constant changes to clinical guidance as more was learnt about the virus
  • regional differences in testing policy
  • changing policy demands
  • complex technical infrastructure in a rapidly scaled ecosystem

How do we ensure we hear the voices of our users?

Matt Lund, Lead User Researcher for the digital testing service, has spoken in his blog of the importance of user research in the national rollout of these services.

The pandemic took our usual approaches, tools and methods and turned them upside down. On other projects, we might work on a problem that we understand - at least partly. We would focus on a specific group of users with some shared goals or characteristics. In this case, absolutely everybody was affected in some way.

We have received feedback from over 1.5 million people, and thousands have volunteered to take part in further research. This commitment to a human-centred approach has meant we have never been short of user insight.


But this massive wealth of insight is meaningless if the team doesn't listen and translate it into change. I have been privileged to work among a dedicated team that is truly committed to Human Centred Design (HCD). They have always been eager to understand the users we are serving and make sure their voices are heard amid the competing priorities.

There are, of course, many considerations beyond user needs: clinical safety, which technology to use, and how to integrate effectively with operations and logistics. The speed and scale of building the service bring their own challenges. At the end of the day, this is a programme of national significance, with public health outcomes and the economy on a finely balanced see-saw. We need to be pragmatic.

When we looked at the design of the service, we had to be cognisant of the latest clinical guidance, which was changing as clinicians and scientists learned more about the virus. Policy decisions, legal guidelines, technical infrastructure - all of these also factor into the design.

User needs were always front of mind for delivery teams, but the necessity of fast-paced delivery meant improvements had to be prioritised against the overall programme roadmap, which included antibody testing and PCR testing for symptomatic users.

6 tips to ensure users are still heard in a changing landscape:

1. Understand what your needs are as a service team

As a team, work out what your goals are. This will help you focus on the data that is important to you right now.

2. Gather all the sources of insight into one place, such as a central dashboard

Review all the data and insight that is available about your service. Think broadly about all the places you might be able to gather insight from, and aim to get a range of quantitative and qualitative data. We started by brainstorming as a team about sources of data, linking into our networks and contacts. We already worked with a lot of this information, but as it wasn't in one place, gathering comparable insights was a challenge.

We were lucky to be able to get insight from a range of sources:

  • many, many rounds of qualitative user research give us a view into people’s motivations, concerns, pains and goals
  • 119 call centre data helps us to understand how many users are not able to use the digital service to report their results
  • Adobe Analytics gives us great insight into how users travel through the service and where their sticking points might be
  • incident reports from the service desk let us know if something has gone wrong so the teams can quickly fix it
  • user feedback via our survey gives us both quantitative measures of satisfaction and ease, and further qualitative insight
  • social media sentiment helps us build a picture of our users with quick soundbites, often expressed in real time as people are experiencing testing services
  • user research insights from teams working on other touchpoints (like the offline experiences of acquiring a test or taking a test)

Then identify where your gaps are. For us this meant working with other teams to get access to existing data feeds. Research and analytics were already well-established forms of insight, but we had to do some work to find reporting on 119 calls, for example. Once we had identified who our key contacts were, we were able to get the data we needed to give us a rich picture.
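For illustration only, here is a minimal sketch of what pulling a few feeds into one comparable table could look like. It is not the programme's actual tooling, and the file names and columns are invented for the example.

```python
import pandas as pd

# Hypothetical exports from each insight source; the real feeds and columns will differ
survey = pd.read_csv("survey_feedback.csv")       # date, satisfaction_score
analytics = pd.read_csv("journey_analytics.csv")  # date, drop_off_rate
calls_119 = pd.read_csv("119_call_reports.csv")   # date, call_count

def to_common_shape(df, source, value_column, measure):
    """Reshape a feed into a shared format: date, source, measure, value."""
    out = df[["date", value_column]].rename(columns={value_column: "value"})
    out["source"] = source
    out["measure"] = measure
    return out

# One central table the whole team can review together
dashboard = pd.concat([
    to_common_shape(survey, "survey", "satisfaction_score", "satisfaction"),
    to_common_shape(analytics, "analytics", "drop_off_rate", "drop-off"),
    to_common_shape(calls_119, "119 calls", "call_count", "call volume"),
], ignore_index=True)

print(dashboard.groupby(["source", "measure"])["value"].describe())
```

The tooling matters far less than the principle: whatever you use, everyone should be looking at the same, comparable picture.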

3. Define a process and cadence which works for your team

From here we had a massive, rich pool of insight to understand our users better. The user interviews gave us a composite view of their experiences from a range of different angles.

We created a baseline Key Performance Indicator, then collated insight into a central dashboard. This meant that we could always see the latest information available to us and it helped us to decide what our priorities were in consultation with stakeholders.
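As a hedged sketch of what tracking against that baseline could look like (the real KPIs and thresholds were agreed with stakeholders; the file, column names and figures here are invented):

```python
import pandas as pd

# Hypothetical survey export: one row per response, with a date and a satisfaction score
responses = pd.read_csv("survey_feedback.csv", parse_dates=["date"])

# Weekly average satisfaction, with the first four weeks treated as the baseline
weekly = responses.resample("W", on="date")["satisfaction_score"].mean()
baseline = weekly.iloc[:4].mean()

# Flag any week that drifts noticeably below the baseline for the team to dig into
for week, score in weekly.items():
    status = "investigate" if score < baseline * 0.9 else "ok"
    print(f"{week.date()}  satisfaction={score:.2f}  baseline={baseline:.2f}  {status}")
```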

Defining a process with regular check-ins will ensure you can start to get value from the data. It can be too easy to respond to the next fire and miss out on the consistent feedback that accrues over time.

Check in with the team every week. Ask questions like:

  • Is there anything new or unexpected?
  • Have we heard this insight before? Is it becoming a persistent pain point?
  • Does it throw up any questions, like “why is there a sudden spike in users seeing this specific error”?
  • How do we find out more about this problem that we’ve spotted?

Also cross-check your quantitative data with your qualitative insights. Data gives us context, but it doesn’t paint the full picture. If something is causing you to ask questions, you will need to dig into it, and work out how you can find out more.

4. Identify quick wins

Doing this process on a regular basis means that we have been able to spot quick wins, like improvements to small pieces of content. Larger changes need to be prioritised and scheduled into a future release, but smaller changes can go in alongside our regular weekly release. Small improvements, such as clarifications to the content, could have fallen through the gaps for "not adding a large amount of value". But we can spot these and ensure they are delivered without delaying the overall roadmap.

5. Continue to build on your evidence base as you start to design, iterate and deliver

Having easy access to data and insight made it a lot easier to build an evidence base for improvements. We can use user research to help contextualise quantitative analytics data and aggregate satisfaction scores from the survey.

Doing this over time has helped us to build a rich picture of what our users need from the service and makes it easier for our product colleagues to get these improvements on the roadmap.

Of course, this process doesn’t stop when something has made its way onto a roadmap. We continue to research with users, and to track feedback to measure success.

Case study

Helping users to report more quickly for their household

As the service started to scale up, schools testing began to impact millions of users. Parents and guardians of young pupils were required to test their children and report the results back to the Government.

Through research and our survey, we regularly heard that there was a real need for family members to be able to report results quickly for the entire household. In the digital service, you are able to use NHS Login to save the majority of your details to your test and trace account.

From analytics, we’ve seen that reporting is twice as quick for logged-in users. Account functionality was only available for individuals, and the technical complexity of creating linked accounts for households meant that it wasn't a quick fix.
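For illustration only (the original finding came from our analytics, and the export format and field names here are invented), a comparison like "twice as quick for logged-in users" can be reproduced from a raw journey export:

```python
import pandas as pd

# Hypothetical export: one row per completed report, with the journey type and its duration
journeys = pd.read_csv("reporting_journeys.csv")  # columns: journey_type, seconds_to_report

# Median time to report a result for logged-in users versus guests
medians = journeys.groupby("journey_type")["seconds_to_report"].median()
ratio = medians["guest"] / medians["logged_in"]

print(medians)
print(f"Guest journeys take {ratio:.1f}x as long as logged-in journeys")
```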

We started to capture all data and insight related to this issue. It helped us make a case for prioritising this work. It also helped us to design a solution informed by users' expectations of a “household account”. With initial technical feasibility and service design done, we were able to secure a place for this major feature on the delivery roadmap.

We then quickly iterated and refined designs. Over the ensuing few weeks, these were put through several rounds of user research to check that they met the needs we had previously identified. The work was detailed, developed and delivered.


What do you think?

I would love to hear your thoughts on the approach. You can connect with me on LinkedIn.

Interested in working at NHS Digital? Search our latest job opportunities.



