

Guest post: leading the way in evaluating digital public health

Categories: Insight, User research

I think clinicians, patients and commissioners have a right to expect a level of evidence for digital tools that is equivalent to the level of evidence for other interventions that we use in healthcare.

– Clinical academic, London

This year Public Health England (PHE) kicked off its Digital Transformation Programme. These ambitious projects aim to further protect and improve the nation’s health by developing new solutions to public health problems that incorporate digital technologies.

As mentioned in a recent NHS England transformation blog, digital health innovations and technology are developing at an ever-increasing pace – and while some of these are excellent, others fall short of the mark.

As the national public health body, PHE should only develop or endorse digital interventions that demonstrate value and are both effective and safe. But how can we measure the effectiveness, safety and value of digital public health?

In April, PHE began an innovative collaboration, bringing together experts from across the health system to answer this difficult question. This project taps into work currently being undertaken by NHS England and the National Institute for Health and Care Excellence (NICE), as well as research at University College London and King's College London.

PHE, however, has gone a step further by approaching this difficult question with design thinking and user-centred research methods.

Determining whether a digital public health intervention achieves its intended health outcomes, or adds value to a person’s life, is like opening Pandora’s box. Evaluation can be approached from multiple, often overlapping perspectives, including clinical, economic and digital:

You have a set of [evaluation] tools and paradigms that are going to be suitable for different types of problems… and you have to be careful about finding the right thing and not trying to squeeze a round peg into a square hole.

– Academic, Edinburgh

In the first phase of PHE’s project, we dived deep into the evaluation perspectives of academics, digital professionals and public health experts.

We spoke with 15 people who develop or commission digital public health products or services. They shared their experiences of evaluating the evidence for an intervention’s effectiveness:

You can’t just demonstrate behaviour change because you’re getting so much data. I think from the [physical fitness] app, we’ve got 8 billion rows of data because of how much information is being passed from the app to the databases.

– Digital professional, London

You have to find a balance between what’s practically doable and what you as an academic maybe would do [when evaluating a digital health product]… so you need people who understand those limitations and aren’t stuck on what you would probably do as an academic.

– Public health professional, Geneva

They also shared their experiences of the types of evidence currently used to prove the effectiveness and value of digital public health interventions. As illustrated in the quotes below, the types of evidence ranged from measuring the number of people downloading a digital health product to assessing a product or service’s political and cost implications:

And when it came to reporting on [digital public health] campaigns… the very headline-grabbing things, like a million downloads [is reported]. Or 2 million people have visited a website, and such.

– Digital professional, London

I mean, traditional medical evaluation is to look for outcomes… But that’s only a small part of what we are looking at. We need to look at also some of the political and cost aspects of this as well.

– Public health professional, Geneva

Going into this project, we expected the distinct perspectives of academics, digital professionals and public health experts to clash. Interestingly, this was not the case overall.

Our research did uncover some clashes, including the use of language in multidisciplinary teams ('implementation' versus 'deployment'), organisations working to conflicting timescales, and questions about the appropriateness of the randomised controlled trial paradigm for digital public health innovations:

I think the randomised control paradigm has been really challenging for the… public health field because we’re running ourselves up against a clinical way of evaluating interventions.

– Public health professional, London

But for the most part, we found a surprising convergence of views on the evaluation of digital public health. This convergence mirrors the growing intersection between innovative, disruptive technology and the traditional healthcare field that digital public health occupies.

The professionals we spoke with were already asking the question, “How do you bring together these distinct perspectives when you evaluate a digital public health intervention?” Each was actively trying to find the middle ground and learn from colleagues working in digital health, whatever their specific field.

As one London-based digital health academic observed, there is a growing acceptance of, and culture around, evaluation:

I think in this new culture of more accountability, especially with Facebook and all the rest of it, I think there’s much more acceptance on the part of the business community and innovators that you do need to evaluate and report your results. But it is just a light touch in some things and a much more rigorous touch in others.

– Academic, London

By the end of this phase, several concepts had emerged from our research into how digital public health interventions are currently evaluated and from the experiences of people working in the field, including:

  • embedding evaluation thinking, skills and tools into the design of digital public health interventions right from the start
  • setting up strategic relationships with suppliers, allowing for flexibility in contracted deliverables based on the evaluation of the product or service
  • setting standard evaluation metrics for public health areas to promote collaboration across the public and private markets, which may, in future, allow for longitudinal analysis of digital public health interventions

Our next step at PHE is to deepen our understanding of the current evaluation landscape and let these concepts inform our vision of how digital public health innovations could be evaluated in future.

We intend to contribute to the development of best practice for evaluating digital public health interventions. At the same time, we will build the capability within PHE and the wider market to incorporate meaningful evaluation into the design and iterative development of digital public health interventions.

To learn more about this project, get in touch with the project lead, Kassandra Karpathakis (kassandra.karpathakis@phe.gov.uk), or the project sponsor Felix Greaves (felix.greaves@phe.gov.uk).
