Johanna Düngler

Position: Understanding the interaction of Privacy, Robustness, and Fairness in machine learning algorithms
Categories: Fellows, PhD Fellows 2024
Location: University of Copenhagen

Abstract:

Machine learning models and algorithms are increasingly integrated into critical decision-making processes across domains such as healthcare, finance, and criminal justice. The decisions made by these models can have profound consequences for individuals and society. Because of this, trustworthiness is a critical facet of contemporary machine learning research and applications. This project focuses on the notions of robustness, privacy, and fairness within ML systems, exploring the trade-offs and dependencies among them.

Past works have shown an intricate relationship between these notions, and balancing them often involves making compromises, as optimising one aspect may come at the expense of another. Starting with Cummings et al. [1], several works [2,3] showed that attempting to achieve fairness and privacy simultaneously can incur a large cost in accuracy. Furthermore, balancing different notions of fairness involves making decisions that optimise for one aspect while accepting trade-offs in another [4]. With regard to robustness, a theoretical equivalence between robustness and privacy has been shown [5,6]. However, this equivalence remains theoretical, as it is not computationally feasible to implement in practice.
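For context, the notion of privacy studied in this line of work [1,2,3] is typically differential privacy. Its standard (ε, δ) formulation, included here only as background rather than as part of the project description, can be sketched as:

```latex
% Standard (epsilon, delta)-differential privacy (background sketch).
% A randomized algorithm M is (epsilon, delta)-differentially private if,
% for all datasets D, D' differing in a single record and all measurable
% output sets S:
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr\!\left[M(D') \in S\right] + \delta
\]
```

Informally, no single individual's record can change the distribution of the algorithm's outputs by much, which is the guarantee that the cited accuracy and fairness trade-offs are measured against.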
The first part of the project seeks to identify conditions on data and algorithms that enable the simultaneous achievement of privacy, robustness, and fairness. Additionally, we aim to quantify the minimum sacrifice in utility required for these notions to coexist, and to design practical machine learning algorithms that achieve this trade-off. Lastly, we will investigate whether relaxing any of these trustworthiness criteria can facilitate their coexistence at a reduced cost in utility. This project will be co-supervised by Amartya Sanyal and Rasmus Pagh, both at the Department of Computer Science of the University of Copenhagen.
The results of this project can significantly impact how machine learning algorithms are used in real-world settings. By understanding the limits of trustworthiness, and by quantifying the costs in accuracy, vulnerability, and model training time incurred to enhance it, the project can provide valuable insights to stakeholders. These include policy makers, healthcare experts, and the public, allowing them to make informed choices when deploying ML systems and to balance the need for robustness against fairness, privacy, and utility considerations.