
https://digitalhealth.blog.gov.uk/2019/04/17/how-were-creating-a-toolkit-for-evaluating-digital-health-products/

How we’re creating a toolkit for evaluating digital health products

Categories: Alpha, Assurance, Services and products, User research
[Image: a laptop on a desk. Caption: Working on the evaluation toolkit]

Imagine you’re a product manager who works in public health. You lead a team in developing a digital health product for quitting smoking. The product has thousands of active users across the UK.

Your organisation wants to know what effect the product has had on people’s health, but there’s no budget available to evaluate it. The team did not build success indicators in from the start, so you’re unsure whether the product is achieving its intended health outcomes – primarily, helping people quit smoking.

Evaluation Toolkit

At Public Health England (PHE), we’re working on a project to enable PHE and the wider health system to better demonstrate the impact, cost-effectiveness and benefit of digital health products to public health.

We’re developing an evaluation toolkit, which supports product managers and the rest of their delivery team in building an evaluation strategy into their project from the start. The toolkit helps teams understand if their digital health product has achieved its intended health outcomes.

Proof of Concept

During the alpha phase of this project, we tested the value proposition behind the evaluation toolkit by supporting the PHE Couch to 5K team in building their own evaluation strategy for their Couch to 5K app. We used our discovery research to define how the evaluation service should work, and which steps are crucial to the evaluation process.

Based on our findings, we defined the stages of carrying out an evaluation as:

  • define health outcomes
  • identify success indicators
  • choose evaluation methods
  • analyse data
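
To make the process concrete, here is an illustrative sketch (not part of the toolkit itself) of how a delivery team might record these four stages for a single product. The class name, field names and example entries are assumptions made for this sketch.

    # Illustrative sketch only: recording the four evaluation stages for one product.
    # The class, field names and example values are assumptions, not the toolkit's.
    from dataclasses import dataclass, field

    @dataclass
    class EvaluationPlan:
        product: str
        health_outcomes: list = field(default_factory=list)     # define health outcomes
        success_indicators: list = field(default_factory=list)  # identify success indicators
        evaluation_methods: list = field(default_factory=list)  # choose evaluation methods
        analysis_notes: list = field(default_factory=list)      # analyse data

    # A hypothetical plan for a stop smoking product
    plan = EvaluationPlan(
        product="Stop smoking app",
        health_outcomes=["Users quit smoking and stay smoke free"],
        success_indicators=["4-week quit rate among active users"],
        evaluation_methods=["Pre- and post-use survey", "In-app analytics"],
    )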

We tested this process with the Couch to 5K team over a series of workshops, with positive results. The team found that:

  • the tools and templates were useful
  • the evaluation process fitted in with their workflow
  • carrying out the evaluation activities as a team worked well
  • having evaluation experts present was beneficial

Logic Model

[Image: four people add post-it notes to a board. Caption: Creating a logic model]

A crucial part of defining your digital health product’s outcomes is creating a logic model. We tested the logic model template in the evaluation toolkit with:

  • the Couch to 5K team at PHE, who used it to kick-start their evaluation journey
  • the Health Checks team at PHE, who are working on preventing cardiovascular disease in 40 to 74 year olds and used it to understand their outcomes as a team
  • the Vitamins project team at the Department of Health and Social Care (DHSC), who are distributing vitamins to low income families and used the logic model to understand their intended health outcomes
  • the Digital Health Intelligence team at PHE, who used it to align the team around their outcomes
  • NHS Digital, who used a logic model to create project-specific indicators that measure the benefit of the NHS.UK platform in improving health literacy

These tests allowed us to create a template that helps teams decide on the intended health outcomes of their digital health product, and how they’ll achieve them. As well as helping people define their outcomes, the template helped teams and their wider stakeholders align on the goals for their project. A business analyst at the Department of Health and Social Care shared:

A logic model is a low risk, high reward thing to do. It’s planning one session. It’s very little prep: you print a bunch of words and get a bunch of people in a room. If you really don’t need it, you’ll be done in 30 mins!
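
For illustration only, a logic model typically links inputs and activities to outputs, outcomes and impact. The sketch below shows that general structure with made-up entries loosely based on a running app; the field names and values are assumptions, not the toolkit’s actual template.

    # Illustrative sketch only: a minimal logic model using the common
    # inputs -> activities -> outputs -> outcomes -> impact pattern.
    logic_model = {
        "inputs": ["Delivery team", "App platform", "Behaviour change evidence"],
        "activities": ["Guide users through a 9-week running plan", "Send reminders"],
        "outputs": ["Number of active users", "Runs completed per week"],
        "outcomes": ["Users become more physically active"],  # intended health outcomes
        "impact": ["Reduced risk of long-term health conditions"],
    }

    # Each intended outcome can then be paired with a success indicator.
    indicators = {
        "Users become more physically active": "Proportion of users completing week 9",
    }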

Usability Testing

We carried out three rounds of usability testing with people from digital delivery teams at PHE, DHSC, charities and health start-ups. Based on our findings, we decided to focus on product managers as our primary users because they:

  • oversee the development of digital health products
  • need to understand if the products are successful
  • are responsible for facilitating evaluation within the delivery team

We learned that people trusted evaluation advice most when it came from colleagues. As a result, we set up online evaluation communities on Slack and KHub to give people a space to share evaluation advice.

We also worked closely with partners at NICE, the NHS Service Manual and the NHS apps library to ensure that the evaluation toolkit fits with their work and can be linked to from their platforms. This way, evaluation advice will spread through the health system via colleagues.

In our first iterations of the prototype, we included a bank of common indicators. People could browse indicators in their subject area, to get a measure of how well they were meeting their intended health outcomes.

During our research we learned that people felt confident choosing indicators without the bank, and that the indicators people chose were often very specific to their product. In the next rounds of testing, we redesigned the indicators section to include guidance but no bank of common indicators.

Accessibility testing

At this early stage in the design process, it was important to know whether the service would work for people with access needs. Static versions of the landing page and logic model wireframes from the evaluation toolkit were tested with users who had:

  • Asperger syndrome
  • partial eyesight
  • hearing impairments
  • learning disabilities

We learned a lot from these sessions and made changes to the prototype, including:

  • short, succinct content with clear, descriptive headings
  • simpler language
  • written descriptions to support diagrams
  • the ability to print out templates

Academic support

We also carried out a number of testing sessions with academics from Edinburgh University, King’s College London and Imperial College London to get feedback on the evaluation process and to check that our explanation of evaluation was correct. The feedback we received was that we needed to define ‘evaluation’ and ‘evaluation methods’ more clearly, so we spent time tweaking these definitions until we reached agreement. We also validated the evaluation process and added the ‘analyse your data’ page to the homepage, after hearing that this was a key part of the process we had been missing.

Content sense checking

Throughout alpha, we worked on making the language around evaluation understandable to non-evaluation experts. We carried out sense checking sessions, where evaluation and non-evaluation experts gave feedback on the evaluation toolkit. Our findings from these exercises helped us to:

  • agree on definitions of our most important terms, before we started writing and building things for users
  • produce better prototypes earlier, and immediately respond to content-focused insights from early testing

Culture of evaluation

[Image: seven people standing up, having a discussion in a meeting room. Caption: PHE hosted an evaluation event]

Alongside the evaluation toolkit, we’re developing an evaluation culture at PHE. A culture that allows time for evaluation and fosters the skills needed to carry it out is crucial to ensuring that evaluation is adopted.

Throughout the alpha phase, we researched and prototyped ways to build the evaluation culture at PHE. We created online channels for an evaluation community, which will continue to grow throughout the project. The Slack community immediately gained interest from people in the public health sector, with very little promotion.

We hosted an evaluation event that brought evaluators and those interested in evaluation together to share best practice. During the event, we encouraged people to share what they wanted out of an evaluation community. As the project continues, we’ll continue exploring what evaluation training could look like, building on the work done during the proof of concept.

We have held face-to-face testing sessions with delivery teams and received positive feedback about them. During usability sessions, people expressed a need to grow their evaluation skills.

For this reason, we’re further exploring the idea of evaluation training, where people can take part in a day-long course on evaluation. The evaluation toolkit would support the training, and teams could continue to use the toolkit afterwards.

Evaluation may also form a part of DHSC’s spend controls, pipeline guidance and assurance process. We’re working to build it into our approvals and spend control process, so that funding is distributed on the basis of health outcomes. This should incentivise teams to carry out evaluation.

Next steps

We’ll now move into the private beta phase. We’ll continue working with a multidisciplinary team, bringing in academic experts in evaluation and developers to build the toolkit. Our team will continue to create an evaluation service that works for delivery teams, so they can understand the impact their digital health products are having on users’ health outcomes.

Find out more about evaluating digital public health.
