Interpretable Machine Learning (IML) / Explainable AI (XAI)

Machine learning models are often referred to as black boxes because their predictions are opaque and difficult for humans to understand. In recent years, numerous post-hoc methods from the field of interpretable machine learning have been developed to gain new insights into black-box models and their underlying data. Model interpretation also helps to validate and debug models, which further contributes to a better understanding. In our research group, we explore and implement approaches for interpretable machine learning. Our research focus lies on model-agnostic methods for tabular data.
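
To illustrate what "post-hoc" and "model-agnostic" mean in practice, here is a minimal sketch of permutation feature importance, one of the methods studied in this group: the model is treated purely as a prediction function, and a feature's importance is estimated as the drop in performance after shuffling that feature's values. All names below (the toy model, data, and `permutation_importance` helper) are illustrative, not part of the group's software.

```python
import random

def permutation_importance(model, X, y, feature, metric, n_repeats=5, seed=0):
    """Model-agnostic, post-hoc importance: increase in the error metric
    after permuting one feature column, averaged over n_repeats shuffles."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        values = [row[feature] for row in X]
        rng.shuffle(values)  # break the association between feature and target
        X_perm = [dict(row, **{feature: v}) for row, v in zip(X, values)]
        drops.append(metric(y, [model(row) for row in X_perm]) - baseline)
    return sum(drops) / n_repeats

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

# Toy "black box" that only uses x1; x2 is ignored by construction.
model = lambda row: 3.0 * row["x1"]
X = [{"x1": float(i), "x2": float(i % 3)} for i in range(20)]
y = [model(row) for row in X]

print(permutation_importance(model, X, y, "x1", mse))  # large: x1 drives predictions
print(permutation_importance(model, X, y, "x2", mse))  # 0.0: x2 is unused
```

Because the procedure only needs predictions, the same code works unchanged for any fitted model, which is exactly the appeal of model-agnostic interpretation methods.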

Focus Areas

Projects and Software

Members

Name       Position
Dr. Giuseppe Casalicchio       PostDoc / Lead
Gunnar König       PhD Student
Christian Scholbeck       PhD Student
Susanne Dandl       PhD Student
Julia Herbinger       PhD Student
Fiona Katharina Ewald       PhD Student

Former Members

Name       Year
Christoph Molnar       2017 – 2021
Quay Au       2017 – 2020

Publications

  1. Herbinger J, Bischl B, Casalicchio G (2022) REPID: Regional Effect Plots with implicit Interaction Detection. International Conference on Artificial Intelligence and Statistics (AISTATS) 25.
  2. Scholbeck CA, Casalicchio G, Molnar C, Bischl B, Heumann C (2022) Marginal Effects for Non-Linear Prediction Functions.
  3. König G, Freiesleben T, Bischl B, Casalicchio G, Grosse-Wentrup M (2021) Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT).
  4. König G, Molnar C, Bischl B, Grosse-Wentrup M (2021) Relative Feature Importance. 2020 25th International Conference on Pattern Recognition (ICPR), pp. 9318–9325.
  5. Molnar C, Freiesleben T, König G, Casalicchio G, Wright MN, Bischl B (2021) Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. arXiv preprint arXiv:2109.01433.
  6. Moosbauer J, Herbinger J, Casalicchio G, Lindauer M, Bischl B (2021) Explaining Hyperparameter Optimization via Partial Dependence Plots. Advances in Neural Information Processing Systems (NeurIPS 2021) 34.
  7. Moosbauer J, Herbinger J, Casalicchio G, Lindauer M, Bischl B (2021) Towards Explaining Hyperparameter Optimization via Partial Dependence Plots. 8th ICML Workshop on Automated Machine Learning (AutoML).
  8. Au Q, Herbinger J, Stachl C, Bischl B, Casalicchio G (2021) Grouped Feature Importance and Combined Features Effect Plot. arXiv preprint arXiv:2104.11688.
  9. Dandl S, Molnar C, Binder M, Bischl B (2020) Multi-Objective Counterfactual Explanations. In: Bäck T, Preuss M, Deutz A et al. (eds) Parallel Problem Solving from Nature – PPSN XVI, pp. 448–469. Springer International Publishing, Cham.
  10. Scholbeck CA, Molnar C, Heumann C, Bischl B, Casalicchio G (2020) Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations. In: Cellier P, Driessens K (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019, pp. 205–216. Springer International Publishing, Cham.
  11. Molnar C, König G, Bischl B, Casalicchio G (2020) Model-agnostic Feature Importance and Effects with Dependent Features – A Conditional Subgroup Approach. arXiv preprint arXiv:2006.04628.
  12. Molnar C, König G, Herbinger J et al. (2020) Pitfalls to Avoid when Interpreting Machine Learning Models. ICML Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.
  13. Molnar C, Casalicchio G, Bischl B (2020) Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability. In: Cellier P, Driessens K (eds) Machine Learning and Knowledge Discovery in Databases, pp. 193–204. Springer International Publishing, Cham.
  14. Molnar C, Casalicchio G, Bischl B (2020) Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges. In: Koprinska I, Kamp M, Appice A et al. (eds) ECML PKDD 2020 Workshops, pp. 417–431. Springer International Publishing, Cham.
  15. Liew B, Rügamer D, De Nunzio A, Falla D (2020) Interpretable machine learning models for classifying low back pain status using functional physiological variables. European Spine Journal 29, 1845–1859.
  16. Casalicchio G, Molnar C, Bischl B (2019) Visualizing the Feature Importance for Black Box Models. In: Berlingerio M, Bonchi F, Gärtner T, Hurley N, Ifrim G (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018, pp. 655–670. Springer International Publishing, Cham.
  17. Molnar C, Casalicchio G, Bischl B (2018) iml: An R package for Interpretable Machine Learning. The Journal of Open Source Software 3, 786.
  18. Casalicchio G, Bischl B, Boulesteix A-L, Schmid M (2015) The residual-based predictiveness curve: A visual tool to assess the performance of prediction models. Biometrics 72, 392–401.

Contact

Feel free to contact us if you are looking for collaborations:

giuseppe.casalicchio [at] stat.uni-muenchen.de