Interpretable Machine Learning (IML) / Explainable AI (XAI)
Machine learning models are often referred to as black boxes because their predictions are opaque and difficult for humans to understand. In recent years, numerous post-hoc methods from the field of interpretable machine learning have been developed to gain new insights into black box models and their underlying data. Model interpretation also helps to validate and debug models. In our research group, we explore and implement approaches for interpretable machine learning, with a focus on model-agnostic methods for tabular data.
Focus Areas:
- Analysis of limitations of interpretation methods
- Connection between causality and model interpretation
- Connection between sensitivity analysis and model interpretation
- Counterfactual explanations
- Consolidation of state-of-the-art methods
- Interpretable boosting models
- Partial dependence plots and accumulated local effects
- Permutation feature importance (see the sketch after this list)
- Shapley values
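Many of these methods follow a simple perturbation-based recipe: perturb parts of the input and observe how the model's predictions or errors change. As an illustration, the following is a minimal from-scratch sketch of permutation feature importance; the function name `permutation_importance` and the mean-absolute-error loss are hypothetical choices for this example, not code from our packages:

```r
# Minimal sketch of permutation feature importance.
# `model` (anything with a predict method), `X` (a data frame of
# features), and `y` (the target) are assumed inputs; the MAE loss
# is an illustrative default.
permutation_importance <- function(model, X, y,
                                   loss = function(y, p) mean(abs(y - p))) {
  base_error <- loss(y, predict(model, X))
  sapply(names(X), function(feature) {
    X_perm <- X
    X_perm[[feature]] <- sample(X_perm[[feature]])  # break feature-target link
    loss(y, predict(model, X_perm)) / base_error    # ratio > 1: feature matters
  })
}
```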
Projects and Software
- iml: R package for model-agnostic interpretability methods.
- Interpretable Machine Learning: an open source book that may serve as a guide for making black box models explainable.
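The snippet below is a minimal usage sketch of the iml package, based on its publicly documented API; the randomForest model and the Boston housing data are only illustrative choices:

```r
library(iml)
library(randomForest)

# Fit an arbitrary black box model (illustrative choice)
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 100)

# Wrap model and data in a Predictor object
X <- Boston[, names(Boston) != "medv"]
pred <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance
imp <- FeatureImp$new(pred, loss = "mae")
plot(imp)

# Partial dependence plot for a single feature
pdp <- FeatureEffect$new(pred, feature = "lstat", method = "pdp")
plot(pdp)

# Shapley values explaining one prediction
shap <- Shapley$new(pred, x.interest = X[1, ])
plot(shap)
```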
Members
Name | Position
---|---
Dr. Giuseppe Casalicchio | PostDoc
Gunnar König | PhD Student
Christoph Molnar | PhD Student
Christian Scholbeck | PhD Student
Susanne Dandl | PhD Student
Quay Au | PhD Student
Julia Herbinger | PhD Student
Contact
Feel free to contact us if you are interested in a collaboration:
giuseppe.casalicchio [at] stat.uni-muenchen.de