Denmark recently held a general election, and during a previous election campaign 62% of Danish voters used Voting Advice Applications (VAAs), known in Danish as mediernes kandidat-tests. With nearly half of users ultimately voting in line with a VAA's recommendation, the responsibility to ensure these tools are trustworthy is considerable.

In a new, as yet unpublished study, Professor Roberta Sinatra of the University of Copenhagen – and a member of DDSA’s Education & Networking Committee – together with colleagues Vedran Sekara, Associate Professor at the IT University of Copenhagen, and Giovanni Astante, a PhD student at the University of Zurich, has examined the algorithms behind a popular Danish VAA. 

“Our research shows that VAAs can have a substantial impact on election outcomes and, consequently, on society. The algorithms behind these tools can generate outputs or predictions that influence our decisions – or even make decisions on our behalf. This can have far-reaching societal consequences. However, algorithms may be biased, based on questionable assumptions, or lack robustness, meaning we cannot take for granted that they function reliably or safely,” Roberta Sinatra explains.

Professor Sinatra and Vedran Sekara, who led the research project, have long focused on auditing algorithms and evaluating measures with significant societal impact, particularly in relation to fairness. 

Data science plays a central role in their work: 

“When auditing an algorithm, one must effectively ‘reverse engineer’ the process that led to its creation. In this case, the algorithm was data-driven, so we applied our data science expertise to understand how the VAA algorithm was developed. We also designed perturbations to test the robustness of its rankings, generated data representing hypothetical users, and compared user responses with those provided by political candidates.”
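The perturbation-based audit described above can be illustrated with a small sketch. The matching rule here (summed distance between a user's and a candidate's answers on a five-point agreement scale) is an assumption for illustration, not the actual VAA algorithm from the study, and all candidate names and numbers are synthetic:

```python
import random

# Hypothetical VAA matcher: users and candidates answer the same
# statements on a -2..2 agreement scale; the recommendation is the
# candidate with the smallest total distance to the user's answers.
# This matching rule is an illustrative assumption, not the audited VAA.
N_QUESTIONS = 25

def match(user, candidates):
    """Return the candidate whose answers are closest to the user's."""
    return min(candidates, key=lambda name: sum(
        abs(u - c) for u, c in zip(user, candidates[name])))

def perturb(answers, rng):
    """Nudge one randomly chosen answer by +/-1 (clipped to the scale),
    mimicking a user who is uncertain about a single statement."""
    i = rng.randrange(len(answers))
    nudged = list(answers)
    nudged[i] = max(-2, min(2, nudged[i] + rng.choice([-1, 1])))
    return nudged

rng = random.Random(42)
# Synthetic candidates and one synthetic "hypothetical user"
candidates = {f"cand_{k}": [rng.randint(-2, 2) for _ in range(N_QUESTIONS)]
              for k in range(10)}
user = [rng.randint(-2, 2) for _ in range(N_QUESTIONS)]

baseline = match(user, candidates)
# How often does a single nudged answer change the recommendation?
flips = sum(match(perturb(user, rng), candidates) != baseline
            for _ in range(1000))
print(f"recommendation changed in {flips / 1000:.1%} of perturbed runs")
```

Repeating this over many hypothetical users, and over many variants of the design choices (scale, distance function, weighting), is one way to quantify how brittle a recommendation is.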

The VAAs are not robust – small disturbances can change their advice

Their key finding is that these algorithms are not robust and can be influenced by small changes: 

“We found that the algorithms are brittle. This means that minor adjustments – reflecting the inevitable assumptions and arbitrary choices made during their design – can lead to different voting recommendations. Our results show that the suggested candidate can change up to 35% of the time, and the recommended party up to 15% of the time.”

This lack of robustness has clear societal implications: 

“The outputs of algorithms can shape people’s behaviour. For instance, VAA results may influence voting decisions. When such tools are used at scale, they have the potential to significantly affect election outcomes,” the professor notes.

Professor Sinatra therefore recommends that voters treat VAA results with caution: 

“Their recommendations may not be reliable. For example, users should consider taking the test multiple times and slightly adjusting their answers – particularly where they are uncertain – to see whether the results change. If the VAA provides a ranking rather than a single recommendation, it is also advisable to consider the top three options equally.”

She also offers recommendations for media organisations that host VAAs: 

“Systems should be independently audited, reliance on simplistic quantitative metrics should be reduced, and users should be presented with multiple potential matches. We also recommend including more than 25 questions when assessing alignment between users and candidates, as a larger number of questions helps to reduce volatility.”
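The intuition behind the last recommendation – more questions, less volatility – can be explored with a toy simulation. The distance-based matcher and the synthetic answer data below are assumptions for illustration, not the study's method; the point is only that with more questions the margin between candidates tends to grow, so a single changed answer flips the result less often:

```python
import random

# Toy model: measure how often nudging one answer flips the recommended
# candidate, for questionnaires of different lengths. The matcher
# (summed answer distance) and random answers are illustrative assumptions.

def match(user, candidates):
    """Closest candidate by total distance over all questions."""
    return min(candidates, key=lambda name: sum(
        abs(u - c) for u, c in zip(user, candidates[name])))

def flip_rate(n_questions, trials=2000, seed=0):
    """Fraction of trials where one nudged answer changes the match."""
    rng = random.Random(seed)
    cands = {k: [rng.randint(-2, 2) for _ in range(n_questions)]
             for k in range(10)}
    flips = 0
    for _ in range(trials):
        user = [rng.randint(-2, 2) for _ in range(n_questions)]
        base = match(user, cands)
        i = rng.randrange(n_questions)
        nudged = list(user)
        nudged[i] = max(-2, min(2, nudged[i] + rng.choice([-1, 1])))
        flips += match(nudged, cands) != base
    return flips / trials

for n in (10, 25, 40):
    print(f"{n} questions -> flip rate {flip_rate(n):.1%}")
```

In this toy model the per-question noise from one nudge stays fixed while the typical gap between candidates' total scores grows with the number of questions, which is the mechanism behind the recommendation.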

DDSA