If you want to survive an avalanche, it turns out that one thing you’ll need in your survival kit is an algorithm.
The rise of digital social movements (some developing seemingly organically in social media, others orchestrated by organisations such as 38 Degrees) has led more people to use digital tools to connect with government more quickly and in ever greater numbers.
Greater participation should be good news for government - we want to hear what people think, because we make better policy when we get input from more people. Technology has made the act of participation in policy making a bit easier.
But for every webchat, every consultation form, and every petition shared on a social network, there’s someone on the other end who needs to read and make sense of these new digital inputs. Inboxes are filling up faster than policy makers can read the emails, and this trend seems likely to continue. So we’ve been asking ourselves: how can the digital team support policy makers in this age of click democracy?
The answer may lie in the rising star of the 21st century: the algorithm. In particular, machine learning algorithms, which can be trained to spot patterns and applied to large datasets to help analyse unstructured text.
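To make that idea concrete, here is a minimal sketch (not the tool DH trialled, and using entirely made-up themes and responses) of the underlying technique: a tiny naive Bayes classifier, built with only the Python standard library, that learns from a handful of labelled consultation responses to tag new free-text responses with a theme.

```python
# Minimal sketch of pattern-spotting on unstructured text: a naive Bayes
# theme classifier. The themes and example responses are hypothetical.
import math
import re
from collections import Counter, defaultdict

def tokenise(text):
    """Lower-case the text and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class ThemeClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # theme -> word frequencies
        self.theme_counts = Counter()            # theme -> number of responses
        self.vocab = set()

    def train(self, labelled_responses):
        """Learn word frequencies from (text, theme) pairs."""
        for text, theme in labelled_responses:
            words = tokenise(text)
            self.word_counts[theme].update(words)
            self.theme_counts[theme] += 1
            self.vocab.update(words)

    def predict(self, text):
        """Return the most likely theme for an unseen response."""
        words = tokenise(text)
        total = sum(self.theme_counts.values())
        best_theme, best_score = None, float("-inf")
        for theme in self.theme_counts:
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.theme_counts[theme] / total)
            theme_total = sum(self.word_counts[theme].values())
            for w in words:
                score += math.log(
                    (self.word_counts[theme][w] + 1)
                    / (theme_total + len(self.vocab))
                )
            if score > best_score:
                best_theme, best_score = theme, score
        return best_theme

# Hypothetical labelled examples an analyst might provide:
training = [
    ("Waiting times at my local hospital are far too long", "access"),
    ("I could not get a GP appointment for three weeks", "access"),
    ("Staff were rushed and communication was poor", "quality of care"),
    ("The nurses were kind but clearly under-resourced", "quality of care"),
]
clf = ThemeClassifier()
clf.train(training)
print(clf.predict("It took months to get an appointment"))  # → access
```

A real consultation tool would be far more sophisticated, but the principle is the same: the more labelled examples the humans provide, the better the algorithm becomes at spotting the patterns they care about.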
So here in DH we’ve been trialling a machine learning tool as a way to help analyse public consultation responses. In particular, we’ve just completed a controlled trial on a consultation that had a high volume of digital responses, which let us compare how the algorithm performed against the more traditional manual analysis.
The findings so far are intriguing. We found that the machine learning approach reinforced some of the findings of the manual approach, but also identified new insights from the consultation responses. We were more easily able to look horizontally across the data to see which issues were consistently raised across different questions. We were also better able to understand what different audience segments were saying using the algorithmic approach.
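That “horizontal” view is easier to picture with a small sketch. Assuming each response to each question has already been tagged with themes (the question IDs, themes, and counts below are invented for illustration), a simple aggregation shows which issues recur across the whole consultation rather than under a single question:

```python
# Sketch of looking horizontally across a consultation: count in how many
# different questions each theme appears. All data here is hypothetical.
from collections import Counter

# question id -> themes tagged in responses to that question
tagged = {
    "Q1": ["access", "funding", "access"],
    "Q2": ["funding", "workforce"],
    "Q3": ["access", "funding", "workforce"],
}

spread = Counter()
for themes in tagged.values():
    for theme in set(themes):  # count each theme once per question
        spread[theme] += 1

for theme, n_questions in spread.most_common():
    print(f"{theme}: raised under {n_questions} of {len(tagged)} questions")
```

In this toy example, “funding” surfaces under every question, a cross-cutting signal that a question-by-question read could easily miss.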
It still took time, and required lots of human input. We found that the value of the insight increased the more data we fed the algorithm, and the more we helped it learn.
For policy makers, digital avalanches should be a good thing, provided we have the methods and tools to cope. After all, we have a responsibility to listen to those who take the time to engage. If algorithms can help us do that, then the policy we make should be better as a result.
Comment by Philippa
Great stuff. This is also helpful in minimising the risk of bias all humans bring to any manual analysis. Hope it's something government can scale up in the future!
Comment by Ray Barrass
Interesting, intriguing but wary.
Comment by Anonymous
Algorithms are only as good as the people who created them. They should not be considered the silver bullet that will get you all your answers. You still need the human element to review and interpret the responses. Sometimes the answers you are looking for can be found between the lines.