Samuel Matthiesen

Position: Towards Uncertainty- and Geometry-aware Generative Models
Categories: Fellows, Postdoc Fellows 2024
Location: Technical University of Denmark

Abstract:

Representation learning aims to transform data so as to extract information relevant to downstream tasks. This problem is closely tied to generative modelling, which learns a mapping from a latent representation to a data point. As machine learning becomes more pervasive, it is critical to quantify confidence in model behaviour in fields such as the life sciences, security, and decision making in general. In this project, we aim to address current limitations of modern approaches to fundamental problems in representation learning and generative modelling.

We consider two related lines of research. The first aims to scale Bayesian inference to modern generative-modelling problems, enabling a principled evaluation of model behaviour through uncertainty estimates. The second concerns the geometry of the latent spaces of these models, allowing us to properly inspect and operate on them. We expect such models to be robust in scenarios different from those represented by the training data, and to support sound analyses of high-dimensional, complex data within their latent spaces.

For both lines of research, we primarily consider Gaussian process latent variable models (GP-LVMs). These models can be employed in many of the same tasks as modern neural networks and, under certain conditions, admit closed-form formulas for the expected metric induced in the latent space, an advantage over neural networks. As a starting point for research on scalability, we consider Laplace approximations that scale linearly with data size. A promising way to bring GP-LVMs to a modern setting is to apply a linearised Laplace approximation to a neural-network autoencoder, effectively transforming its generative part (the decoder) into a GP-LVM. Furthermore, we intend to explore how to use the expected metric of GP-LVMs in larger problems, since its closed-form formulas can be inefficient to work with.
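The linearised Laplace idea above can be illustrated with a minimal sketch: linearising a decoder in its parameters around a MAP estimate turns a Gaussian posterior over the weights into a Gaussian predictive over the outputs, which is the sense in which the decoder behaves like a (degenerate) Gaussian process. Everything below is an illustrative assumption, not the project's actual implementation: a toy two-layer decoder, an isotropic placeholder for the Laplace posterior covariance, and finite differences standing in for an automatic differentiation engine.

```python
import numpy as np

rng = np.random.default_rng(0)
D_LATENT, D_HIDDEN, D_DATA = 2, 4, 3  # toy dimensions (assumptions)

def unpack(theta):
    # Split a flat parameter vector into the two weight matrices.
    w1 = theta[: D_HIDDEN * D_LATENT].reshape(D_HIDDEN, D_LATENT)
    w2 = theta[D_HIDDEN * D_LATENT:].reshape(D_DATA, D_HIDDEN)
    return w1, w2

def decoder(theta, z):
    # Toy decoder: latent z -> data point x.
    w1, w2 = unpack(theta)
    return w2 @ np.tanh(w1 @ z)

def param_jacobian(theta, z, eps=1e-6):
    # Jacobian of the decoder output w.r.t. the parameters.
    # Finite differences for self-containedness; an autodiff
    # engine would provide this exactly.
    base = decoder(theta, z)
    J = np.zeros((base.size, theta.size))
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        J[:, i] = (decoder(t, z) - base) / eps
    return J

# MAP parameters and a placeholder Laplace posterior covariance.
theta_map = rng.normal(size=D_HIDDEN * D_LATENT + D_DATA * D_HIDDEN)
sigma_post = 0.1 * np.eye(theta_map.size)

# Linearised predictive: Gaussian with mean f(z; theta_map)
# and covariance J @ sigma_post @ J.T.
z = np.array([0.5, -1.0])
J = param_jacobian(theta_map, z)
pred_mean = decoder(theta_map, z)
pred_cov = J @ sigma_post @ J.T
print(pred_mean.shape, pred_cov.shape)  # (3,) (3, 3)
```

The key design point is that the linearisation is in parameter space, not latent space: the decoder stays nonlinear in z, but its parametric uncertainty becomes Gaussian and can be propagated in closed form.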
Recent advances in modern automatic differentiation engines are a promising avenue for addressing this, as the Jacobian of the model is usually what is needed to construct the expected metric tensor. This requires careful reformulation of common operations on Riemannian manifolds. Together, the proposed lines of investigation aim to build scalable, uncertainty-aware generative models whose latent spaces are geometrically well understood.
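As a minimal illustration of how the Jacobian yields a metric on the latent space: for a deterministic decoder f, the pullback metric at a latent point z is G(z) = J(z)ᵀJ(z), where J(z) is the Jacobian of f at z. For a GP-LVM the expected metric carries an additional variance term, which this sketch omits; the toy decoder below and the use of finite differences in place of an autodiff engine are likewise illustrative assumptions.

```python
import numpy as np

def decoder(z):
    # Toy deterministic decoder: latent (dim 2) -> data (dim 3).
    return np.array([np.tanh(z[0]), z[0] * z[1], np.sin(z[1])])

def latent_jacobian(z, eps=1e-6):
    # Jacobian of the decoder w.r.t. the latent point z.
    # Finite differences here; in practice an automatic
    # differentiation engine would compute this exactly.
    base = decoder(z)
    J = np.zeros((base.size, z.size))
    for i in range(z.size):
        zp = z.copy()
        zp[i] += eps
        J[:, i] = (decoder(zp) - base) / eps
    return J

def pullback_metric(z):
    # Deterministic part of the (expected) metric: G = J^T J.
    J = latent_jacobian(z)
    return J.T @ J

z = np.array([0.3, -0.7])
G = pullback_metric(z)
print(G.shape)  # (2, 2)
```

With G in hand, latent-space quantities such as curve lengths are computed under the metric (integrating sqrt(ż ᵀ G ż) along a curve) rather than with the Euclidean inner product, which is what makes distances in the latent space reflect distances in data space.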