Digital health interventions – like the apps found in the NHS Apps Library – have enormous potential to improve the public’s health.
At Public Health England (PHE), we want to make sure our digital health interventions work well. So we’re carrying out a project – Evaluation of digital public health – to help us better understand their impact, cost-effectiveness and benefit to public health.
We are in the alpha phase of designing a process to support teams working in this space, and we’re testing it here at PHE on our own digital health interventions.
One of the ideas that emerged from the project’s discovery phase was "evaluation thinking": an ambition to embed evaluation concepts, skills and tools into the culture of teams that design for digital public health.
Before we could draw inspiration and start designing prototypes, we had to understand PHE’s existing culture. We spoke to people both inside and outside PHE to understand:
- what propels an evaluation forward?
- what gets in the way of a successful evaluation?
- which people support the evaluation process?
By asking these questions, we’ve begun to understand how to create an environment that encourages evaluation – and started coming up with ideas for how to get there.
Here’s what we found:
Building a culture of evaluation
“Setting up the processes and infrastructure that allows better evaluation to be done. Being clear about what the evidence bar is, and then making sure that there's a process in place for things to happen.” – Nick, PHE.
From our research, we know there are teams in PHE where evaluation is already highly valued, forming an integral part of how these teams work.
But right now, their activities are hidden in small pockets of the organisation. And from speaking to evaluators outside PHE, we know they’re most effective when part of a wider culture that supports and values evaluation.
Therefore, we need to foster an organisation-wide culture where:
- the evaluation of digital health interventions is the norm
- evaluation activities are well understood
- the role of the evaluator is well supported
Connecting with other evaluators
“One of the difficulties I’ve had is that there are teams working on something similar [evaluation] that I wasn't aware of, because it wasn't mentioned by anybody on the project. Maybe people weren't aware of it.” – Natalie, PHE.
Currently, people doing evaluation in PHE feel isolated. They want more opportunities to meet other evaluators and learn from one another’s experiences of the process.
We want to support a new community of practice by creating an organisation-wide culture of evaluation. For the first time, evaluators across PHE will be able to connect with one another, share best practice and get valuable support and feedback.
Designing evaluation into projects from the start
“But if you want to evaluate something it needs to be planned and factored in as much as the build of whatever it is you're doing needs to be factored in. It should never be seen as separate because what that means is, and this is the reality of the [project], is that you are then playing catch up.” – Yasmin, PHE.
Evaluation should be designed into a project from the start, even if the evaluation activities themselves don’t need to begin until later in the project.
For example, product managers should budget for evaluation activities – like creating a logic model – with the rest of the delivery team. This means evaluation will be an integral part of projects – and not just an afterthought.
Choosing evaluation methods that allow for iteration
“That's been the problem up to now with randomised control trials. Those things can take two years, and it just doesn't work on the cycles that we work on.” – William, PHE.
Teams often evaluate digital health interventions using traditional academic methods. But these complex, time-consuming approaches are often in conflict with the iterative, fast-changing nature of digital projects.
The digitised nature of these interventions also means that academic methods alone aren’t always the best way to measure their outcomes. Where appropriate, this project will suggest digital-specific evaluation methods alongside traditional academic and health-economic approaches.
Making decisions with support from stakeholders
We developed an evaluation stakeholder map that shows how different stakeholders have very different interests in the results of an evaluation. As such, it’s crucial that evaluators engage with each of their stakeholders.
The people we spoke to talked about how important it was to get decision-making support from these stakeholders – especially those empowered to make high-level decisions.
As one evaluator shared, having regular formal stakeholder meetings leads to “lively discussion and different views on what people wanted out of [a] project”. More engagement from stakeholders means more support for teams.
What’s next?
Now, we’re taking what we’ve learned and acting on it. To start with, we’ll be making changes inside PHE to strengthen the culture of evaluation for digital health interventions.
Our research showed that evaluators don’t necessarily need an academic background: they could be part of a digital delivery team. So the first thing we want to build and test is a competency framework for evaluators – highlighting the skills they need to carry out evaluations.
Next, we’d like to trial a meetup for evaluators. This would promote collaboration across PHE – and perhaps with other government bodies.
Finally, we plan to create practical guidance for teams on how much evaluation of a digital health intervention is likely to cost – and why. This will help people financing digital projects better understand the cost and timeframes of evaluation, and what their money will actually pay for.
We’re continuing to improve the evaluation of digital health interventions at PHE – and we want your feedback! We hold regular show and tells, so if you’d like to get involved, contact the project lead, Kassandra Karpathakis.