Inference-friendly deep generative models such as Variational Autoencoders (VAEs) have shown great success in modelling incomplete data. These models typically infer posteriors from the observed features and decode the latent variables to impute the missing ones. Recent deep generative models are well suited to structured data such as images, sequences, or vector-valued data, relying on neural architectures tailored to each data type. Unfortunately, applying these networks to grid-type data with missing regions necessitates pre-imputation, such as zero-filling missing patches, which leads to biased inference.
In contrast, Implicit Neural Representations (INRs) model complex functions that map coordinates to features point-wise using feedforward neural networks, independently of the data type and structure. As a consequence, they infer knowledge only from the observed points, thus avoiding the aforementioned bias. Although Markov Chain Monte Carlo (MCMC) methods have been widely used to improve inference in classical deep generative models of structured data, their effectiveness in models of INRs remains an open research question.
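The point-wise nature of INRs can be sketched as follows (the architecture, layer sizes, and coordinates are illustrative assumptions): a feedforward network is evaluated only at the coordinates of observed points, so missing points are simply absent rather than filled in.

```python
import numpy as np

# Minimal INR sketch: a small MLP mapping 2-D coordinates to a
# scalar feature value. Weights are random for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def inr(coords):
    """Point-wise forward pass: (N, 2) coordinates -> (N, 1) features."""
    h = np.tanh(coords @ W1 + b1)
    return h @ W2 + b2

# Only observed coordinates enter the model; missing grid locations
# never appear, so no zero-filling or pre-imputation is required.
observed_coords = np.array([[0.0, 0.0], [0.1, 0.5], [0.9, 0.2]])
features = inr(observed_coords)
print(features.shape)  # (3, 1)
```

Because the network consumes one coordinate at a time, the same model applies to images, sequences, or any other coordinate-indexed data without a type-specific architecture.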
My proposed project aims to revolutionize deep generative modelling by leveraging the power of Implicit Neural Representations (INRs) to model incomplete data without introducing imputation bias. By i) creating novel deep generative models of INRs, and ii) proposing novel MCMC-based inference methods for these models, we can overcome the limitations of existing techniques and open new directions for MCMC-based inference in generative models of INRs. These contributions have the potential to transform the field of deep generative modelling, with significant implications for how such models handle missing data.