One of our main focus areas is methods for generating counterfactual explanations. Counterfactual explanation methods, a growing research area in artificial intelligence and machine learning, generate hypothetical "what-if" scenarios that reveal why a model made a particular decision. By exploring alternative inputs or model behaviors, we aim to enhance the transparency and interpretability of AI models, enabling users to better understand and trust their outcomes. This field holds promise for improving the accountability of AI applications across various domains, such as healthcare.
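The core idea can be sketched in a few lines: starting from an input, search for a small perturbation that flips the model's prediction. The minimal example below is purely illustrative (a linear classifier and a simple step-wise search are our own assumptions, not any of the published methods listed here).

```python
# Illustrative counterfactual search for a linear classifier.
# All names and the search strategy are assumptions for this sketch.
import numpy as np

def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    return int(np.dot(w, x) + b > 0)

def counterfactual(w, b, x, step=0.05, max_iter=1000):
    """Nudge x toward the decision boundary until the predicted label flips."""
    original = predict(w, b, x)
    direction = -w if original == 1 else w      # move toward the other class
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if predict(w, b, cf) != original:
            return cf                           # label flipped: counterfactual found
        cf += step * direction / np.linalg.norm(direction)
    return None

# Example: a point classified as 1 and a nearby input classified as 0.
w, b = np.array([1.0, 1.0]), -1.0
x = np.array([1.0, 1.0])                        # w.x + b = 1 > 0 -> class 1
cf = counterfactual(w, b, x)
print(predict(w, b, x), predict(w, b, cf))      # 1 0
```

The returned `cf` is close to the original input but receives the opposite prediction; presenting the difference `cf - x` to a user answers "what minimal change would have altered the decision?", which is the essence of a counterfactual explanation.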
Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines
In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part VI, 2023

Style-transfer counterfactual explanations: An application to mortality prevention of ICU patients
Artificial Intelligence in Medicine, 2023

JUICE: JUstIfied Counterfactual Explanations
In International Conference on Discovery Science, 2022

Learning Time Series Counterfactuals via Latent Space Representations
In International Conference on Discovery Science, 2021

Counterfactual Explanations for Survival Prediction of Cardiovascular ICU Patients
In Artificial Intelligence in Medicine (AIME), 2021