Interpretable Machine Learning (IML) / Explainable AI (XAI)

Machine learning models are often referred to as black boxes because their predictions are opaque and difficult for humans to understand. In recent years, numerous post-hoc methods from the field of interpretable machine learning have been developed to gain new insights into black-box models and the data underlying them. Model interpretation also helps to validate and debug models, which further contributes to a better understanding. In our research group, we explore and implement approaches for interpretable machine learning. Our research focus lies on model-agnostic methods for tabular data.
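
As a small illustration of what such a post-hoc, model-agnostic method can look like, the sketch below computes permutation feature importance for an arbitrary black-box model fitted on tabular data. The model, dataset, and feature indices are synthetic placeholders chosen for the example, not artifacts from our group; the sketch assumes scikit-learn and NumPy are available.

```python
# Minimal sketch of a model-agnostic, post-hoc interpretation method:
# permutation feature importance on a synthetic tabular dataset.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Fit an arbitrary "black box" model on tabular data.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def permutation_importance(model, X, y, n_repeats=10, seed=None):
    """Importance of a feature = increase in error after shuffling its values."""
    rng = np.random.default_rng(seed)
    baseline = mean_squared_error(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            scores.append(mean_squared_error(y, model.predict(X_perm)))
        importances[j] = np.mean(scores) - baseline
    return importances

for j, imp in enumerate(permutation_importance(model, X, y, seed=0)):
    print(f"feature {j}: importance {imp:.3f}")
```

Because the method only queries the model's predictions, the same procedure works for any fitted model, which is what makes it model-agnostic.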

Focus Areas

Projects and Software

Members

Name                        Position
Dr. Giuseppe Casalicchio    Postdoc
Gunnar König                PhD Student
Christoph Molnar            PhD Student
Christian Scholbeck         PhD Student
Susanne Dandl               PhD Student
Quay Au                     PhD Student
Julia Herbinger             PhD Student

Contact

Feel free to contact us if you are interested in collaborating:

giuseppe.casalicchio [at] stat.uni-muenchen.de