This is a talk given by Prof. Daniel Kuhn as part of the Online Seminar Series Machine Learning NeEDS Mathematical Optimization, https://congreso.us.es/mlneedsmo/, highlighting the role of Operations Research in Artificial Intelligence, with the support of EURO, https://www.euro-online.org.
The last decade has witnessed a surge of algorithms that have a consequential impact on our daily lives. Machine learning methods are increasingly used, for example, to decide whom to grant or deny loans, college admission, bail or parole. Even though it would be natural to expect algorithms to be free of prejudice, it turns out that cutting-edge AI techniques can learn or even amplify human biases and may thus be far from fair. Accordingly, a key challenge in automated decision-making is to ensure that individuals of different demographic groups have equal chances of securing beneficial outcomes. In this talk, we first highlight the difficulties of defining fairness criteria, and we show that a naive use of popular fairness constraints can have undesired consequences. We then characterize situations in which fairness constraints or unfairness penalties have a regularizing effect and may thus improve out-of-sample performance. We also identify a class of unfairness measures that are amenable to efficient stochastic gradient descent algorithms, and we propose a statistical hypothesis test for fairness.
Link to the talk: https://eu.bbcollab.com/guest/6e6638bbdd854181b21637e1e8c02e43
The organizers of the Online Seminar Series Machine Learning NeEDS Mathematical Optimization:
Emilio Carrizosa, IMUS-Instituto de Matemáticas de la Universidad de Sevilla
Nuria Gómez-Vargas, IMUS-Instituto de Matemáticas de la Universidad de Sevilla
Thomas Halskov, Copenhagen Business School
Dolores Romero Morales, Copenhagen Business School