Fiona Katharina Ewald

About

I am part of the Interpretable Machine Learning / Explainable AI research group, where my work focuses on developing methods that help uncover true patterns in data. Specifically, I study global feature importance techniques, which rank features according to their influence on model predictions across the entire input space. My focus lies on loss-based approaches, which evaluate feature relevance by measuring how much predictive performance degrades when features are altered or removed. These methods aim to go beyond model-specific explanations and provide more faithful insights into the underlying data-generating process. Recently, I have become particularly interested in Rashomon sets: sets of models that all achieve nearly optimal performance. By analyzing these sets, I aim to better understand the robustness and ambiguity of learned patterns, and to identify which features consistently matter across a range of plausible models.
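As a minimal sketch of the loss-based idea, the snippet below implements plain permutation feature importance on a toy dataset: a feature's importance is the average increase in loss after shuffling that feature's column, which breaks its association with the target. All names, the toy data, and the linear "model" are illustrative assumptions, not taken from any package mentioned on this page.

```python
import random

def permutation_importance(model, X, y, loss, feature, n_repeats=5, seed=0):
    """Loss-based importance: average loss increase when one feature's
    values are shuffled across observations."""
    rng = random.Random(seed)
    base = loss(model(X), y)  # loss of the unperturbed model
    increases = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)  # break the feature-target association
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        increases.append(loss(model(X_perm), y) - base)
    return sum(increases) / n_repeats

# Toy example: the target depends only on feature 0,
# so permuting feature 1 should not change the loss at all.
X = [[float(i), float(i % 3)] for i in range(30)]
y = [2.0 * row[0] for row in X]
model = lambda X: [2.0 * row[0] for row in X]  # a "fitted" model that ignores feature 1
mse = lambda pred, y: sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

imp0 = permutation_importance(model, X, y, mse, feature=0)
imp1 = permutation_importance(model, X, y, mse, feature=1)
```

Here `imp0` comes out strictly positive while `imp1` is exactly zero, since the model never uses feature 1, illustrating how loss-based rankings reflect what the model actually relies on.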

Contact

Institut für Statistik

Ludwig-Maximilians-Universität München

Ludwigstraße 33

D-80539 München

fiona.ewald [at] stat.uni-muenchen.de

References

  1. Ewald FK, Binder M, Feurer M, Bischl B, Casalicchio G (2026) CASHomon Sets: Efficient Rashomon Sets Across Multiple Model Classes and their Hyperparameters. arXiv preprint arXiv:2603.15321.
  2. Burk L, Ewald FK, Casalicchio G, Wright MN, Bischl B (2026) xplainfi: Feature Importance and Statistical Inference for Machine Learning in R. arXiv preprint arXiv:2603.15306.
  3. Dandl S, Ewald FK, Valero-Leal E, Bischl B, Blesch K (2025) Technical Considerations for XAI in AI Governance. EurIPS 2025 Workshop on Private AI Governance.
  4. Ewald FK, Bothmann L, Wright MN, Bischl B, Casalicchio G, König G (2024) A Guide to Feature Importance Methods for Scientific Inference. In: Longo L, Lapuschkin S, Seifert C (eds) Explainable Artificial Intelligence, pp. 440–464. Springer Nature Switzerland, Cham.
  5. Herbinger J, Dandl S, Ewald FK, Loibl S, Casalicchio G (2024) Leveraging Model-Based Trees as Interpretable Surrogate Models for Model Distillation. In: Nowaczyk S, Biecek P, Chung NC, Vallati M, Skruch P, Jaworek-Korjakowska J, Parkinson S, Nikitas A, Atzmüller M, Kliegr T, Schmid U, Bobek S, Lavrac N, Peeters M, Dierendonck R van, Robben S, Mercier-Laurent E, Kayakutlu G, Owoc ML, Mason K, Wahid A, Bruno P, Calimeri F, Cauteruccio F, Terracina G, Wolter D, Leidner JL, Kohlhase M, Dimitrova V (eds) Artificial Intelligence. ECAI 2023 International Workshops, pp. 232–249. Springer Nature Switzerland.