Model Explainability and Fairness


Description

We focus on methods for generating interpretable and explainable models, such as locally explainable models and rule learning. Interpretability has gained increasing attention in recent years, as data science methods and models are used to a larger extent in both industry and society as a whole. It can be quantified at the model level, i.e., by providing a human-readable description of the whole model, or at the instance level, i.e., by explaining the reasons and motivation behind each individual decision. Many aspects of a model relate to interpretability, such as model stability and size, dimensionality reduction, and visualization.
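Instance-level explanation can be illustrated with a local surrogate in the spirit of LIME (which one of our publications below evaluates): sample perturbations around the instance, query the black-box model, and fit a proximity-weighted linear model whose coefficients serve as feature attributions. The sketch below is illustrative only; `local_explanation` and the toy scoring function `f` are our own assumed names, not an implementation from the group's software.

```python
import numpy as np

def local_explanation(predict_fn, x, n_samples=500, sigma=0.5, seed=0):
    """LIME-style sketch: fit a weighted linear surrogate around x.

    predict_fn maps an (n, d) array to (n,) scores; x is a 1-D instance.
    Returns the per-feature coefficients of the local linear model.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=sigma, size=(n_samples, d))
    y = predict_fn(Z)
    # Proximity weights: perturbations closer to x count more.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # drop the intercept; keep feature attributions

# Toy black-box model: locally, the first feature dominates the score.
f = lambda X: np.tanh(3 * X[:, 0]) + 0.1 * X[:, 1] ** 2
phi = local_explanation(f, np.array([0.2, 1.0]))
```

Near `x = (0.2, 1.0)` the surrogate assigns a much larger coefficient to the first feature than to the second, mirroring the local behaviour of `f`; the attributions are only valid in that neighbourhood, which is exactly the model-level vs. instance-level distinction above.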


Latest publications

  • Measuring the Burden of (Un)fairness Using Counterfactuals
    Kuratomi, Alejandro, Pitoura, Evaggelia, Papapetrou, Panagiotis, Lindgren, Tony, and Tsaparas, Panayiotis
    In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part I, 2023
  • Post-Hoc Explainability for Time Series Classification: Toward a signal processing perspective
    Mochaourab, Rami, Venkitaraman, Arun, Samsten, Isak, Papapetrou, Panagiotis, and Rojas, Cristian R.
    IEEE Signal Processing Magazine, 2022
  • Evaluating Local Interpretable Model-Agnostic Explanations on Clinical Machine Learning Classification Models
    Barr Kumarakulasinghe, Nesaretnam, Blomberg, Tobias, Liu, Jintai, Saraiva Leao, Alexandra, and Papapetrou, Panagiotis
    In International Symposium on Computer-Based Medical Systems (CBMS), 2020
  • Example-Based Feature Tweaking Using Random Forests
    Lindgren, Tony, Papapetrou, Panagiotis, Samsten, Isak, and Asker, Lars
    In International Conference on Information Reuse and Integration for Data Science (IRI), 2019
  • Explainable Predictions of Adverse Drug Events from Electronic Health Records Via Oracle Coaching
    Crielaard, Loes, and Papapetrou, Panagiotis
    In International Conference on Data Mining Workshops (ICDMW), 2018


    Implementations


    People

    Panagiotis Papapetrou, Professor
    sequential and temporal data mining, explainability, healthcare applications
    Tony Lindgren, Associate Professor
    explainability, predictive maintenance
    Isak Samsten, Senior Lecturer
    explainability, temporal data mining, fintech
    Ioanna Miliou, Senior Lecturer
    nowcasting and forecasting, data science for social good with applications in healthcare, epidemics and peace
    Alejandro Kuratomi
    interpretable models with statistical guarantees
    Maria Movin
    multi-objective learning for modeling user preferences (Spotify)
    Franco Rugolon
    explainable machine learning for healthcare