Anders Gjølbye Madsen

Position: Causal Approach to Trustworthy Artificial Intelligence in Healthcare
Categories: Fellows, PhD Fellows 2024
Location: Technical University of Denmark

Abstract:

The proposed PhD project aims to enhance the trustworthiness and interpretability of machine learning models in healthcare by leveraging causal approaches. It addresses the need for models with human-interpretable, causal handles, so that AI systems are reliable and understandable to the professionals who use them. To that end, the project will investigate and develop methods for distinguishing between predictive and explanatory features in ML models, emphasizing the central role of causality in model interpretability.

Methodologically, the research will explore both the enhancement of existing models and the refinement of post hoc interpretation analysis. A notable direction involves extending the approach of Haufe et al. (2014) from linear models to the nonlinear setting of deep learning, specifically targeting the gap between features that predict well and features that genuinely explain the underlying signal. The project also proposes to build on established explainable AI (XAI) methods such as LIME, incorporating Haufe et al.'s transformation to strengthen LIME's capacity for causal explanations. As an initial practical step, it will revisit the applicability of the TCAV method to neuroimaging foundation models, aiming to further validate the use of human-interpretable concepts in a Concept Bottleneck Model setting.
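
To make the Haufe et al. (2014) distinction concrete: the weights of a linear backward model (the extraction filter) can be large for channels that carry no signal at all, because those channels help cancel noise, whereas the corresponding activation pattern, obtained as Cov(x) · W · Cov(ŝ)⁻¹ in the paper's notation, reveals where the signal actually resides. The following minimal NumPy sketch illustrates this on simulated EEG-like data; the simulation and all variable names are illustrative assumptions of this summary, not code or data from the project.

```python
# Minimal sketch of the filter-vs-pattern distinction from Haufe et al. (2014).
# The simulated data and variable names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a latent signal s observed across 10 channels through a known
# forward pattern a_true, plus noise that is shared between channels 0 and 1.
n, d = 5000, 10
s = rng.standard_normal(n)
a_true = np.zeros(d)
a_true[0] = 1.0                       # only channel 0 carries the signal
noise = rng.standard_normal((n, d))
noise[:, 1] += noise[:, 0]            # channel 1 shares channel 0's noise
X = np.outer(s, a_true) + noise

# Backward model: least-squares extraction filter w, with s_hat = X @ w.
w, *_ = np.linalg.lstsq(X, s, rcond=None)

# Haufe et al.'s transformation: for a single-output backward model the
# activation pattern is a = Cov(X) w / Var(s_hat).
s_hat = X @ w
a = np.cov(X, rowvar=False) @ w / np.var(s_hat)

# The filter puts substantial (negative) weight on channel 1, which carries
# no signal; the pattern correctly attributes the signal to channel 0 alone.
print("filter  w:", np.round(w, 2))
print("pattern a:", np.round(a, 2))
```

The project's methodological question is how to carry this correction beyond the linear case into nonlinear deep models and into post hoc tools such as LIME.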

The significance of this research lies in its potential to close the trust gap in AI applications within healthcare. By ensuring that AI systems are both reliable and understandable, the project supports the goal of making AI a valuable tool for healthcare professionals, ultimately leading to better patient outcomes. The theoretical advances anticipated from this research will provide a foundation for future work on learning causal concepts for foundation models, with the long-term vision of enabling personalized care through inherently causal deep learning models.

Data for this project will be sourced through an exclusive agreement with BrainCapture, which provides annotated EEG recordings from clinical studies, alongside publicly available large-scale EEG datasets and institutional resources. This rich data foundation will support the empirical side of the research, allowing the proposed methodologies to be validated and refined in real-world scenarios.