Research

Dissertations

You can find a list of theses examined by Bernd Bischl at LMU Munich on the website of the University Library.

Publications

A full list of publications in BibTeX format is available here.

2024

  1. Sommer, E., Wimmer, L., Papamarkou, T., Bothmann, L., Bischl, B., & Rügamer, D. (2024, July 26). Connecting the Dots: Is Mode-Connectedness the Key to Feasible Sample-Based Inference in Bayesian Neural Networks? Proceedings of the 41st International Conference on Machine Learning.
    link|pdf
  2. Rügamer, D., Kolb, C., Weber, T., Kook, L., & Nagler, T. (2024, July 23). Generalizing Orthogonalization for Models with Non-linearities. Accepted for Publication at the 41st International Conference on Machine Learning.
  3. Lindauer, M., Karl, F., Klier, A., Moosbauer, J., Tornede, A., Müller, A. C., Hutter, F., Feurer, M., & Bischl, B. (2024, July 22). Position: A Call to Action for a Human-Centered AutoML Paradigm. Accepted for Publication at the 41st International Conference on Machine Learning (ICML).
    link|pdf
  4. Herrmann, M., Lange, F. J. D., Eggensperger, K., Casalicchio, G., Wever, M., Feurer, M., Rügamer, D., Hüllermeier, E., Boulesteix, A.-L., & Bischl, B. (2024, July 22). Position: Why We Must Rethink Empirical Research in Machine Learning. Accepted for Publication at the 41st International Conference on Machine Learning.
    link|pdf
  5. Papamarkou, T., Skoularidou, M., Palla, K., Aitchison, L., Arbel, J., Dunson, D., Filippone, M., Fortuin, V., Hennig, P., Hubin, A., Immer, A., Karaletsos, T., Khan, M. E., Kristiadi, A., Li, Y., Lobato, J. M. H., Mandt, S., Nemeth, C., Osborne, M. A., … Zhang, R. (2024, July 21). Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI. Proceedings of the 41st International Conference on Machine Learning.
    link|pdf
  6. Bothmann, L., & Peters, K. (2024). Fairness von KI – ein Brückenschlag zwischen Philosophie und Maschinellem Lernen. In B. Rathgeber & M. Maier (Eds.), Grenzen Künstlicher Intelligenz.
  7. Kohli, R., Feurer, M., Bischl, B., Eggensperger, K., & Hutter, F. (2024, May 11). Towards Quantifying the Effect of Datasets for Benchmarking: A Look at Tabular Machine Learning. Data-Centric Machine Learning (DMLR) Workshop at the International Conference on Learning Representations (ICLR).
  8. Ronval, B., Nijssen, S., & Bothmann, L. (2024, May 7). Can generative AI-based data balancing mitigate unfairness issues in Machine Learning? EWAF’24: European Workshop on Algorithmic Fairness.
  9. Kook, L., Kolb, C., Schiele, P., Dold, D., Arpogaus, M., Fritz, C., Baumann, P., Kopper, P., Pielok, T., Dorigatti, E., & Rügamer, D. (2024, April 26). How Inverse Conditional Flows Can Serve as a Substitute for Distributional Regression. Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence.
  10. Dandl, S., Becker, M., Bischl, B., Casalicchio, G., & Bothmann, L. (2024). mlr3summary: Concise and interpretable summaries for machine learning models. Proceedings of the Demo Track of the 2nd World Conference on EXplainable Artificial Intelligence.
    link|pdf
  11. Rundel, D., Kobialka, J., von Crailsheim, C., Feurer, M., Nagler, T., & Rügamer, D. (2024, April 8). Interpretable Machine Learning for TabPFN. 2nd World Conference on EXplainable Artificial Intelligence.
    link|pdf
  12. Kook, L., Baumann, P. F. M., Dürr, O., Sick, B., & Rügamer, D. (2024). Estimating Conditional Distributions with Neural Networks using R package deeptrafo. Journal of Statistical Software.
    link|pdf
  13. Kopper, P., Rügamer, D., Sonabend, R., Bischl, B., & Bender, A. (2024). Training Survival Models using Scoring Rules.
    link|pdf
  14. Dandl, S., Blesch, K., Freiesleben, T., König, G., Kapar, J., Bischl, B., & Wright, M. (2024, March 15). CountARFactuals – Generating plausible model-agnostic counterfactual explanations with adversarial random forests. 2nd World Conference on EXplainable Artificial Intelligence.
    link|pdf
  15. Weerts, H., Pfisterer, F., Feurer, M., Eggensperger, K., Bergman, E., Awad, N., Vanschoren, J., Pechenizkiy, M., Bischl, B., & Hutter, F. (2024). Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML. Journal of Artificial Intelligence Research, 79, 639–677.
    link|pdf
  16. Liew, B. X. W., Pfisterer, F., Rügamer, D., & Zhai, X. (2024). Strategies to optimise machine learning classification performance when using biomechanical features. Journal of Biomechanics, 111998.
    link
  17. Bothmann, L., & Peters, K. (2024). Fairness als Qualitätskriterium im Maschinellen Lernen – Rekonstruktion des philosophischen Konzepts und Implikationen für die Nutzung außergesetzlicher Merkmale bei qualifizierten Mietspiegeln. AStA Wirtschafts- und Sozialstatistisches Archiv.
  18. Dandl, S., Haslinger, C., Hothorn, T., Seibold, H., Sverdrup, E., Wager, S., & Zeileis, A. (2024). What Makes Forest-Based Heterogeneous Treatment Effect Estimators Work? The Annals of Applied Statistics, 18(1), 506–528.
    link
  19. Bothmann, L., Peters, K., & Bischl, B. (2024). What Is Fairness? On the Role of Protected Attributes and Fictitious Worlds. ArXiv:2205.09622 [Cs, Stat].
    link
  20. Rügamer, D. (2024, January 20). Scalable Higher-Order Tensor Product Spline Models. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics.
  21. Dold, D., Rügamer, D., Sick, B., & Dürr, O. (2024, January 20). Semi-Structured Subspace Inference. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics.
  22. Dandl, S., Bender, A., & Hothorn, T. (2024). Heterogeneous treatment effect estimation for observational data using model-based forests. Statistical Methods in Medical Research, 33(3), 392–413. https://doi.org/10.1177/09622802231224628
  23. Sale, Y., Hofman, P., Löhr, T., Wimmer, L., Nagler, T., & Hüllermeier, E. (2024). Label-wise Aleatoric and Epistemic Uncertainty Quantification. 40th Conference on Uncertainty in Artificial Intelligence (UAI).
    link|pdf
  24. Wiese, J. G., Wimmer, L., Papamarkou, T., Bischl, B., Günnemann, S., & Rügamer, D. (2024). Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry (Extended Abstract). 33rd International Joint Conferences on Artificial Intelligence (IJCAI).
    link|pdf
  25. Nagler, T., Schneider, L., Bischl, B., & Feurer, M. (2024). Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization. ArXiv:2405.15393 [Stat.ML].
    arXiv|pdf|code
  26. Ewald, F. K., Bothmann, L., Wright, M. N., Bischl, B., Casalicchio, G., & König, G. (2024). A Guide to Feature Importance Methods for Scientific Inference. 2nd World Conference on EXplainable Artificial Intelligence.
    link|pdf
  27. Schalk, D., Bischl, B., & Rügamer, D. (2024). Privacy-Preserving and Lossless Distributed Estimation of High-Dimensional Generalized Additive Mixed Models. Statistics & Computing, 34(31).
    link|pdf
  28. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2024, January). Constrained Probabilistic Mask Learning for Task-specific Undersampled MRI Reconstruction. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
    link|pdf
  29. Solderer, A., Hicklin, S., Aßenmacher, M., Ender, A., & Schmidlin, P. (2024). Influence of an allogenic collagen scaffold on implant sites with thin supracrestal tissue height: a randomized clinical trial. Accepted to Clinical Oral Investigations.
  30. Mayer, L., Heumann, C., & Aßenmacher, M. (2024). Can OpenSource beat ChatGPT? - A Comparative Study of Large Language Models for Text-to-Code Generation. Accepted at the Swiss Text Analytics Conference 2024.
  31. Aßenmacher, M., Sauter, N., & Heumann, C. (2024). Classifying multilingual party manifestos: Domain transfer across country, time, and genre. Accepted at the Swiss Text Analytics Conference 2024.
    link|pdf
  32. Deiseroth, B., Meuer, M., Gritsch, N., Eichenberg, C., Schramowski, P., Aßenmacher, M., & Kersting, K. (2024). Divergent Token Metrics: Measuring degradation to prune away LLM components – and optimize quantization. Accepted at the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
    link|pdf
  33. Gruber, C., Hechinger, K., Aßenmacher, M., Kauermann, G., & Plank, B. (2024). More Labels or Cases? Assessing Label Variation in Natural Language Inference. Proceedings of the Third Workshop on Understanding Implicit and Underspecified Language, 22–32.
    link|pdf
  34. Herbinger, J., Dandl, S., Ewald, F. K., Loibl, S., & Casalicchio, G. (2024). Leveraging Model-Based Trees as Interpretable Surrogate Models for Model Distillation. In S. Nowaczyk, P. Biecek, N. C. Chung, M. Vallati, P. Skruch, J. Jaworek-Korjakowska, S. Parkinson, A. Nikitas, M. Atzmüller, T. Kliegr, U. Schmid, S. Bobek, N. Lavrac, M. Peeters, R. van Dierendonck, S. Robben, E. Mercier-Laurent, G. Kayakutlu, M. L. Owoc, … V. Dimitrova (Eds.), Artificial Intelligence. ECAI 2023 International Workshops (pp. 232–249). Springer Nature Switzerland.
    link|pdf

2023

  1. Liew, B. X. W., Rügamer, D., & Birn-Jeffery, A. (2023). Neuromechanical stabilisation of the centre of mass during running. Gait & Posture.
  2. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2023). Unreading Race: Purging Protected Features from Chest X-ray Embeddings. ArXiv:2311.01349.
    link|pdf
  3. Bothmann, L., Dandl, S., & Schomaker, M. (2023, October 25). Causal Fair Machine Learning via Rank-Preserving Interventional Distributions. Proceedings of the 1st Workshop on Fairness and Bias in AI Co-Located with 26th European Conference on Artificial Intelligence (ECAI 2023).
    link|pdf
  4. Rügamer, D., Pfisterer, F., Bischl, B., & Grün, B. (2023). Mixture of Experts Distributional Regression: Implementation Using Robust Estimation with Adaptive First-order Methods. AStA Advances in Statistical Analysis.
    link|pdf
  5. Hornung, R., Nalenz, M., Schneider, L., Bender, A., Bothmann, L., Bischl, B., Augustin, T., & Boulesteix, A.-L. (2023). Evaluating Machine Learning Models in Non-Standard Settings: An Overview and New Findings. ArXiv:2310.15108 [Cs, Stat].
    link|pdf
  6. Zhang, Z., Yang, H., Ma, B., Rügamer, D., & Nie, E. (2023). Baby’s CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models.
  7. Jeblick, K., Schachtner, B., Dexl, J., Mittermeier, A., Stüber, A. T., Topalis, J., Weber, T., Wesp, P., Sabel, B., Ricke, J., & Ingrisch, M. (2023). ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports. European Radiology.
    link|pdf
  8. Liew, B. X. W., Kovacs, F. M., Rügamer, D., & Royuela, A. (2023). Automatic Variable Selection Algorithms in Prognostic Factor Research in Neck Pain. Journal of Clinical Medicine, 12(19).
  9. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2023). Auxiliary Cross-Modal Representation Learning With Triplet Loss Functions for Online Handwriting Recognition. IEEE Access, 11, 94148–94172. https://doi.org/10.1109/ACCESS.2023.3310819
  10. Bothmann, L., Wimmer, L., Charrakh, O., Weber, T., Edelhoff, H., Peters, W., Nguyen, H., Benjamin, C., & Menzel, A. (2023). Automated wildlife image classification: An active learning tool for ecological applications. Ecological Informatics, 77(102231).
    link|pdf
  11. Kolb, C., Müller, C. L., Bischl, B., & Rügamer, D. (2023). Smoothing the Edges: A General Framework for Smooth Optimization in Sparse Regularization using Hadamard Overparametrization. ArXiv Preprint ArXiv:2307.03571.
    link|pdf
  12. Liew, B. X. W., Rügamer, D., Mei, Q., Altai, Z., Zhu, X., Zhai, X., & Cortes, N. (2023). Smooth and accurate predictions of joint contact force timeseries in gait using overparameterised deep neural networks. Frontiers in Bioengineering and Biotechnology: Biomechanics.
  13. Kolb, C., Bischl, B., Müller, C. L., & Rügamer, D. (2023, July 1). Sparse Modality Regression. Proceedings of the 37th International Workshop on Statistical Modelling, IWSM 2023.
    link|pdf
  14. Wiese, J. G., Wimmer, L., Papamarkou, T., Bischl, B., Günnemann, S., & Rügamer, D. (2023, June 6). Towards Efficient MCMC Sampling in Bayesian Neural Networks by Exploiting Symmetry. Machine Learning and Knowledge Discovery in Databases (ECML-PKDD).
    link|pdf
  15. Rügamer, D. (2023). A New PHO-rmula for Improved Performance of Semi-Structured Networks. ICML 2023.
  16. Ott, F., Heublein, L., Rügamer, D., Bischl, B., & Mutschler, C. (2023). Fusing Structure from Motion and Simulation-Augmented Pose Regression from Optical Flow for Challenging Indoor Environments. ArXiv:2304.07250.
    link|pdf
  17. Rath, K., Rügamer, D., Bischl, B., von Toussaint, U., & Albert, C. (2023). Dependent state space Student-t processes for imputation and data augmentation in plasma diagnostics. Contributions to Plasma Physics.
  18. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2023, March 20). Cascaded Latent Diffusion Models for High-Resolution Chest X-ray Synthesis. Advances in Knowledge Discovery and Data Mining: 27th Pacific-Asia Conference, PAKDD 2023.
    link|pdf
  19. Ott, F., Raichur, N. L., Rügamer, D., Feigl, T., Neumann, H., Bischl, B., & Mutschler, C. (2023). Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression. ArXiv:2208.00919.
    link|pdf
  20. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2023). Implicit Embeddings via GAN Inversion for High Resolution Chest Radiographs. MICCAI Workshop on Medical Applications with Disentanglements 2022.
    link|pdf
  21. Dorigatti, E., Bischl, B., & Rügamer, D. (2023, January 23). Frequentist Uncertainty Quantification in Semi-Structured Neural Networks. International Conference on Artificial Intelligence and Statistics.
  22. Pielok, T., Bischl, B., & Rügamer, D. (2023, January 23). Approximate Bayesian Inference with Stein Functional Variational Gradient Descent. International Conference on Learning Representations.
    link|pdf
  23. Wimmer, L., Sale, Y., Hofman, P., Bischl, B., & Hüllermeier, E. (2023). Quantifying Aleatoric and Epistemic Uncertainty in Machine Learning: Are Conditional Entropy and Mutual Information Appropriate Measures? 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023).
    link|pdf
  24. Gertheiss, J., Rügamer, D., Liew, B., & Greven, S. (2023). Functional Data Analysis: An Introduction and Recent Developments.
    link
  25. Hartl, W. H., Kopper, P., Xu, L., Heller, L., Mironov, M., Wang, R., Day, A. G., Elke, G., Küchenhoff, H., & Bender, A. (2023). Relevance of Protein Intake for Weaning in the Mechanically Ventilated Critically Ill: Analysis of a Large International Database. Critical Care Medicine.
    link
  26. Hendrix, P., Sun, C. C., Brighton, H., & Bender, A. (2023). On the Connection Between Language Change and Language Processing. Cognitive Science, 47(12), e13384.
    link|pdf
  27. Coens, F., Knops, N., Tieken, I., Vogelaar, S., Bender, A., Kim, J. J., Krupka, K., Pape, L., Raes, A., Tönshoff, B., Prytula, A., & Registry, C. (2023). Time-Varying Determinants of Graft Failure in Pediatric Kidney Transplantation in Europe. Clinical Journal of the American Society of Nephrology.
    link
  28. Wiegrebe, S., Kopper, P., Sonabend, R., Bischl, B., & Bender, A. (2023). Deep Learning for Survival Analysis: A Review (Number arXiv:2305.14961). arXiv.
    link|pdf
  29. Garces Arias, E., Pai, V., Schöffel, M., Heumann, C., & Aßenmacher, M. (2023). Automatic Transcription of Handwritten Old Occitan Language. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 15416–15439. https://doi.org/10.18653/v1/2023.emnlp-main.953
  30. Öztürk, I. T., Nedelchev, R., Heumann, C., Garces Arias, E., Roger, M., Bischl, B., & Aßenmacher, M. (2023). How Different Is Stereotypical Bias Across Languages? 3rd Workshop on Bias and Fairness in AI (Co-Located with ECML-PKDD 2023).
    link|pdf
  31. Witte, M., Schwenzow, J., Heitmann, M., Reisenbichler, M., & Aßenmacher, M. (2023). Potential for Decision Aids based on Natural Language Processing. Proceedings of the European Marketing Academy, 52nd, (114322).
    link|pdf
  32. Aßenmacher, M., Rauch, L., Goschenhofer, J., Stephan, A., Bischl, B., Roth, B., & Sick, B. (2023). Towards Enhancing Deep Active Learning with Weak Supervision and Constrained Clustering. Proceedings of the 7th Workshop on Interactive Adaptive Learning (Co-Located with ECML-PKDD 2023).
    link|pdf
  33. Akkus, C., Chu, L., Djakovic, V., Jauch-Walser, S., Koch, P., Loss, G., Marquardt, C., Moldovan, M., Sauter, N., Schneider, M., Schulte, R., Urbanczyk, K., Goschenhofer, J., Heumann, C., Hvingelby, R., Schalk, D., & Aßenmacher, M. (2023). Multimodal Deep Learning. ArXiv Preprint ArXiv:2301.04856.
    link|pdf
  34. Bischl, B., Binder, M., Lang, M., Pielok, T., Richter, J., Coors, S., Thomas, J., Ullmann, T., Becker, M., Boulesteix, A.-L., Deng, D., & Lindauer, M. (2023). Hyperparameter Optimization: Foundations, Algorithms, Best Practices, and Open Challenges. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, e1484. https://doi.org/10.1002/widm.1484
  35. Gündüz, H. A., Binder, M., To, X.-Y., Mreches, R., Bischl, B., McHardy, A. C., Münch, P. C., & Rezaei, M. (2023). A self-supervised deep learning method for data-efficient training in genomics. Communications Biology, 6(1), 928. https://doi.org/10.1038/s42003-023-05310-2
  36. König, G., Freiesleben, T., & Grosse-Wentrup, M. (2023). Improvement-focused Causal Recourse (ICR). 37th AAAI Conference.
  37. Koch, P., Nuñez, G. V., Garces Arias, E., Heumann, C., Schöffel, M., Häberlin, A., & Aßenmacher, M. (2023). A tailored Handwritten-Text-Recognition System for Medieval Latin. First Workshop on Ancient Language Processing (ALP 2023).
    link|pdf
  38. Luther, C., König, G., & Grosse-Wentrup, M. (2023). Efficient SAGE Estimation via Causal Structure Learning. AISTATS.
  39. Münch, P., Mreches, R., To, X.-Y., Gündüz, H. A., Moosbauer, J., Klawitter, S., Deng, Z.-L., Robertson, G., Rezaei, M., Asgari, E., Franzosa, E., Huttenhower, C., Bischl, B., McHardy, A., & Binder, M. (2023). A platform for deep learning on (meta)genomic sequences (preprint). https://doi.org/10.21203/rs.3.rs-2527258/v1
  40. Feurer, M., Eggensperger, K., Bergman, E., Pfisterer, F., Bischl, B., & Hutter, F. (2023). Mind the Gap: Measuring Generalization Performance Across Multiple Objectives. In B. Crémilleux, S. Hess, & S. Nijssen (Eds.), Advances in Intelligent Data Analysis XXI. IDA 2023. (Vol. 13876, pp. 130–142). Springer, Cham.
    link|arXiv|pdf
  41. Prager, R. P., Dietrich, K., Schneider, L., Schäpermeier, L., Bischl, B., Kerschke, P., Trautmann, H., & Mersmann, O. (2023). Neural Networks as Black-Box Benchmark Functions Optimized for Exploratory Landscape Features. Proceedings of the 17th ACM/SIGEVO Conference on Foundations of Genetic Algorithms, 129–139.
    link|pdf
  42. Purucker, L., Schneider, L., Anastacio, M., Beel, J., Bischl, B., & Hoos, H. (2023). Q(D)O-ES: Population-based Quality (Diversity) Optimisation for Post Hoc Ensemble Selection in AutoML. AutoML Conference 2023.
    link|pdf
  43. Rauch, L., Aßenmacher, M., Huseljic, D., Wirth, M., Bischl, B., & Sick, B. (2023). ActiveGLAE: A Benchmark for Deep Active Learning with Transformers. ECML-PKDD 2023.
    link|pdf
  44. Scheppach, A., Gündüz, H. A., Dorigatti, E., Münch, P. C., McHardy, A. C., Bischl, B., Rezaei, M., & Binder, M. (2023). Neural Architecture Search for Genomic Sequence Data. 2023 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 1–10. https://doi.org/10.1109/CIBCB56990.2023.10264875
  45. Schneider, L., Bischl, B., & Thomas, J. (2023). Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models. Proceedings of the Genetic and Evolutionary Computation Conference, 538–547.
    link|pdf
  46. Schulze, P., Wiegrebe, S., Thurner, P. W., Heumann, C., Aßenmacher, M., & Wankmüller, S. (2023). Exploring Topic-Metadata Relationships with the STM: A Bayesian Approach. Accepted at Advances in Statistical Analysis (AStA).
    link
  47. Fischer, S., Harutyunyan, L., Feurer, M., & Bischl, B. (2023). OpenML-CTR23 – A curated tabular regression benchmarking suite. AutoML Conference 2023 (Workshop).
    link|pdf
  48. Urchs, S., Thurner, V., Aßenmacher, M., Heumann, C., & Thiemichen, S. (2023). How Prevalent is Gender Bias in ChatGPT? - Exploring German and English ChatGPT Responses. 1st Workshop on Biased Data in Conversational Agents (Co-Located with ECML-PKDD 2023).
    link|pdf
  49. Vahidi, A., Wimmer, L., Gündüz, H. A., Bischl, B., Hüllermeier, E., & Rezaei, M. (2023). Diversified Ensemble of Independent Sub-Networks for Robust Self-Supervised Representation Learning. ArXiv Preprint ArXiv:2308.14705.
  50. Vogel, M., Aßenmacher, M., Gubler, A., Attin, T., & Schmidlin, P. R. (2023). Cleaning potential of interdental brushes around orthodontic brackets - an in vitro investigation. Swiss Dental Journal, 133(9).
    link|pdf
  51. Karl, F., Pielok, T., Moosbauer, J., Pfisterer, F., Coors, S., Binder, M., Schneider, L., Thomas, J., Richter, J., Lang, M., Garrido-Merchán, E. C., Branke, J., & Bischl, B. (2023). Multi-Objective Hyperparameter Optimization in Machine Learning – An Overview. ACM Transactions on Evolutionary Learning and Optimization, 3(4), 1–50.
  52. Dandl, S., Casalicchio, G., Bischl, B., & Bothmann, L. (2023). Interpretable Regional Descriptors: Hyperbox-Based Local Explanations. In D. Koutra, C. Plant, M. Gomez Rodriguez, E. Baralis, & F. Bonchi (Eds.), ECML PKDD 2023: Machine Learning and Knowledge Discovery in Databases: Research Track (pp. 479–495). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-43418-1_29
  53. Dandl, S., Hofheinz, A., Binder, M., Bischl, B., & Casalicchio, G. (2023). counterfactuals: An R Package for Counterfactual Explanation Methods. ArXiv:2304.06569v2. https://doi.org/10.48550/arXiv.2304.06569
  54. Molnar, C., König, G., Bischl, B., & Casalicchio, G. (2023). Model-agnostic Feature Importance and Effects with Dependent Features–A Conditional Subgroup Approach. Data Mining and Knowledge Discovery. https://doi.org/10.1007/s10618-022-00901-9
  55. Scholbeck, C. A., Funk, H., & Casalicchio, G. (2023). Algorithm-Agnostic Feature Attributions for Clustering. In L. Longo (Ed.), Explainable Artificial Intelligence (pp. 217–240). Springer Nature Switzerland.
    link|pdf
  56. Molnar, C., Freiesleben, T., König, G., Herbinger, J., Reisinger, T., Casalicchio, G., Wright, M. N., & Bischl, B. (2023). Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. In L. Longo (Ed.), Explainable Artificial Intelligence (pp. 456–479). Springer Nature Switzerland.
    link|pdf
  57. Herbinger, J., Bischl, B., & Casalicchio, G. (2023). Decomposing Global Feature Effects Based on Feature Interactions. ArXiv Preprint ArXiv:2306.00541.
    link|pdf
  58. Löwe, H., Scholbeck, C. A., Heumann, C., Bischl, B., & Casalicchio, G. (2023). fmeffects: An R Package for Forward Marginal Effects. ArXiv Preprint ArXiv:2310.02008.
    link|pdf
  59. Scholbeck, C. A., Moosbauer, J., Casalicchio, G., Gupta, H., Bischl, B., & Heumann, C. (2023). Position Paper: Bridging the Gap Between Machine Learning and Sensitivity Analysis. ArXiv Preprint ArXiv:2312.13234.
    link|pdf
  60. Stüber, A. T., Coors, S., Schachtner, B., Weber, T., Rügamer, D., Bender, A., Mittermeier, A., Öcal, O., Seidensticker, M., Ricke, J., & others. (2023). A Comprehensive Machine Learning Benchmark Study for Radiomics-Based Survival Analysis of CT Imaging Data in Patients With Hepatic Metastases of CRC. Investigative Radiology.
    link
  61. Rügamer, D., Kolb, C., & Klein, N. (2023). Semi-Structured Distributional Regression. The American Statistician.
    link|pdf

2022

  1. Dorigatti, E., Bischl, B., & Schubert, B. (2022). Improved proteasomal cleavage prediction with positive-unlabeled learning. Extended Abstract Presented at Machine Learning for Health (ML4H) Symposium 2022, November 28th, 2022, New Orleans, United States & Virtual.
    link|pdf
  2. Rügamer, D., Baumann, P. F. M., Kneib, T., & Hothorn, T. (2022). Probabilistic Time Series Forecasts with Autoregressive Transformation Models. Statistics & Computing.
    link|pdf
  3. Ziegler, I., Ma, B., Nie, E., Bischl, B., Rügamer, D., Schubert, B., & Dorigatti, E. (2022). What cleaves? Is proteasomal cleavage prediction reaching a ceiling? Extended Abstract Presented at the NeurIPS Learning Meaningful Representations of Life (LMRL) Workshop 2022.
    link|pdf
  4. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022, October 24). Representation Learning for Tablet and Paper Domain Adaptation in favor of Online Handwriting Recognition. MPRSS 2022.
  5. Ziegler, I., Ma, B., Nie, E., Bischl, B., Rügamer, D., Schubert, B., & Dorigatti, E. (2022, October 24). What cleaves? Is proteasomal cleavage prediction reaching a ceiling? NeurIPS 2022 Workshop on Learning Meaningful Representations of Life (LMRL).
    link|pdf
  6. Kaiser, P., Rügamer, D., & Kern, C. (2022, October 24). Uncertainty as a key to fair data-driven decision making. NeurIPS 2022 Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML).
    link|pdf
  7. Rezaei, M., Dorigatti, E., Rügamer, D., & Bischl, B. (2022, October 21). Joint Debiased Representation Learning and Imbalanced Data Clustering. ArXiv Preprint ArXiv:2109.05232.
    link|pdf
  8. Bothmann, L. (2022). Künstliche Intelligenz in der Strafverfolgung. In K. Peters (Ed.), Cyberkriminalität. LMU Munich.
    link
  9. Ghada, W., Casellas, E., Herbinger, J., Garcia-Benadí, A., Bothmann, L., Estrella, N., Bech, J., & Menzel, A. (2022). Stratiform and Convective Rain Classification Using Machine Learning Models and Micro Rain Radar. Remote Sensing, 14(18).
    link
  10. Ramjith, J., Bender, A., Roes, K. C. B., & Jonker, M. A. (2022). Recurrent Events Analysis with Piece-Wise Exponential Additive Mixed Models. Statistical Modelling, 1471082X221117612.
    link|pdf
  11. Ott, F., Rügamer, D., Heublein, L., Hamann, T., Barth, J., Bischl, B., & Mutschler, C. (2022). Benchmarking Online Sequence-to-Sequence and Character-based Handwriting Recognition from IMU-Enhanced Pens. International Journal on Document Analysis and Recognition (IJDAR).
    link|pdf
  12. Schiele, P., Berninger, C., & Rügamer, D. (2022). ARMA Cell: A Modular and Effective Approach for Neural Autoregressive Modeling. ArXiv Preprint ArXiv:2208.14919.
    link|pdf
  13. Schalk, D., Bischl, B., & Rügamer, D. (2022). Accelerated Componentwise Gradient Boosting using Efficient Data Representation and Momentum-based Optimization. Journal of Computational and Graphical Statistics.
    link|pdf
  14. Rath, K., Rügamer, D., Bischl, B., von Toussaint, U., Rea, C., Maris, A., Granetz, R., & Albert, C. (2022). Data augmentation for disruption prediction via robust surrogate models. Journal of Plasma Physics.
  15. Dandl, S., Pfisterer, F., & Bischl, B. (2022). Multi-Objective Counterfactual Fairness. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 328–331.
    link
  16. Mittermeier, M., Weigert, M., Rügamer, D., Küchenhoff, H., & Ludwig, R. (2022). A Deep Learning Version of Hess & Brezowskys Classification of Großwetterlagen over Europe: Projection of Future Changes in a CMIP6 Large Ensemble. Environmental Research Letters.
  17. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022, June 29). Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift. ACM Multimedia.
    link|pdf
  18. Rügamer, D., Bender, A., Wiegrebe, S., Racek, D., Bischl, B., Müller, C., & Stachl, C. (2022, June 14). Factorized Structured Regression for Large-Scale Varying Coefficient Models. Machine Learning and Knowledge Discovery in Databases (ECML-PKDD).
    link|pdf
  19. Beaudry, G., Drouin, O., Gravel, J., Smyrnova, A., Bender, A., Orri, M., Geoffroy, M.-C., & Chadi, N. (2022). A Comparative Analysis of Pediatric Mental Health-Related Emergency Department Utilization in Montréal, Canada, before and during the COVID-19 Pandemic. Annals of General Psychiatry, 21(1), 17.
    link|pdf
  20. Klaß, A., Lorenz, S., Lauer-Schmaltz, M., Rügamer, D., Bischl, B., Mutschler, C., & Ott, F. (2022, June 4). Uncertainty-aware Evaluation of Time-Series Classification for Online Handwriting Recognition with Domain Shift. IJCAI-ECAI 2022, 1st International Workshop on Spatio-Temporal Reasoning and Learning.
  21. Fritz, C., Nicola, G. D., Günther, F., Rügamer, D., Rave, M., Schneble, M., Bender, A., Weigert, M., Brinks, R., Hoyer, A., Berger, U., Küchenhoff, H., & Kauermann, G. (2022). Challenges in Interpreting Epidemiological Surveillance Data - Experiences from Germany. Journal of Computational & Graphical Statistics.
  22. Rügamer, D. (2022). Additive Higher-Order Factorization Machines. ArXiv Preprint ArXiv:2205.14515.
    link|pdf
  23. Rügamer, D., Kolb, C., Fritz, C., Pfisterer, F., Kopper, P., Bischl, B., Shen, R., Bukas, C., de Andrade e Sousa, L. B., Thalmeier, D., Baumann, P., Kook, L., Klein, N., & Müller, C. L. (2022). deepregression: a Flexible Neural Network Framework for Semi-Structured Deep Distributional Regression. Journal of Statistical Software (Provisionally Accepted).
    link|pdf
  24. Schalk, D., Hoffmann, V., Bischl, B., & Mansmann, U. (2022). Distributed non-disclosive validation of predictive models by a modified ROC-GLM. ArXiv Preprint ArXiv:2202.10828.
    link|pdf
  25. Liew, B. X. W., Kovacs, F. M., Rügamer, D., & Royuela, A. (2022). Machine learning for prognostic modelling in individuals with non-specific neck pain. European Spine Journal.
  26. Fritz, C., Dorigatti, E., & Rügamer, D. (2022). Combining Graph Neural Networks and Spatio-temporal Disease Models to Predict COVID-19 Cases in Germany. Scientific Reports, 12, 2045–2322.
    link|pdf
  27. Rügamer, D., Baumann, P., & Greven, S. (2022). Selective Inference for Additive and Mixed Models. Computational Statistics and Data Analysis, 167, 107350.
    link|pdf
  28. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022). Cross-Modal Common Representation Learning with Triplet Loss Functions. ArXiv Preprint ArXiv:2202.07901.
    link|pdf
  29. Dorigatti, E., Goschenhofer, J., Schubert, B., Rezaei, M., & Bischl, B. (2022). Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection. ArXiv Preprint ArXiv:2109.05232.
    link|pdf
  30. Kopper, P., Wiegrebe, S., Bischl, B., Bender, A., & Rügamer, D. (2022). DeepPAMM: Deep Piecewise Exponential Additive Mixed Models for Complex Hazard Structures in Survival Analysis. Advances in Knowledge Discovery and Data Mining, 249–261.
    link|pdf
  31. Hartl, W. H., Kopper, P., Bender, A., Scheipl, F., Day, A. G., Elke, G., & Küchenhoff, H. (2022). Protein intake and outcome of critically ill patients: analysis of a large international database using piece-wise exponential additive mixed models. Critical Care, 26, 7.
    link|pdf
  32. Pretzsch, E., Heinemann, V., Stintzing, S., Bender, A., Chen, S., Holch, J. W., Hofmann, F. O., Ren, H., Bösch, F., Küchenhoff, H., Werner, J., & Angele, M. K. (2022). EMT-Related Genes Have No Prognostic Relevance in Metastatic Colorectal Cancer as Opposed to Stage II/III: Analysis of the Randomised, Phase III Trial FIRE-3 (AIO KRK 0306; FIRE-3). Cancers, 14(22), 5596.
    link|pdf
  33. Aßenmacher, M., Dietrich, M., Elmaklizi, A., Hemauer, E. M., & Wagenknecht, N. (2022). Whitepaper: New Tools for Old Problems. https://doi.org/10.5281/zenodo.6606451
  34. Böhme, R., Coors, S., Oster, P., Munser-Kiefer, M., & Hilbert, S. (2022). Machine learning for spelling acquisition - How accurate is the prediction of specific spelling errors in German primary school students? PsyArXiv. https://doi.org/10.31234/osf.io/shguf
  35. Deng, D., Karl, F., Hutter, F., Bischl, B., & Lindauer, M. (2022). Efficient Automated Deep Learning for Time Series Forecasting. Joint European Conference on Machine Learning and Knowledge Discovery in Databases.
    link
  36. Freiesleben, T., König, G., Molnar, C., & Tejero-Cantero, A. (2022). Scientific inference with interpretable machine learning: Analyzing models to learn about real-world phenomena. ArXiv Preprint ArXiv:2206.05487.
  37. Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., & Vanschoren, J. (2022). AMLB: an AutoML Benchmark. ArXiv Preprint ArXiv:2207.12560.
    link | pdf
  38. Hurmer, N., To, X.-Y., Binder, M., Gündüz, H. A., Münch, P. C., Mreches, R., McHardy, A. C., Bischl, B., & Rezaei, M. (2022). Transformer Model for Genome Sequence Analysis. LMRL Workshop - NeurIPS 2022.
    link | pdf
  39. Koch, P., Aßenmacher, M., & Heumann, C. (2022). Pre-trained language models evaluating themselves - A comparative study. Proceedings of the Third Workshop on Insights from Negative Results in NLP, 180–187.
    link|pdf
  40. Lebmeier, E., Aßenmacher, M., & Heumann, C. (2022, September). On the current state of reproducibility and reporting of uncertainty for Aspect-based Sentiment Analysis. Machine Learning and Knowledge Discovery in Databases (ECML-PKDD).
    pdf
  41. Li*, Y., Khakzar*, A., Zhang, Y., Sanisoglu, M., Kim, S. T., Rezaei, M., Bischl, B., & Navab, N. (2022). Analyzing the Effects of Handling Data Imbalance on Learned Features from Medical Images by Looking Into the Models. 2nd Workshop on Interpretable Machine Learning in Healthcare (IMLH 2022) at the 39th International Conference on Machine Learning (ICML 2022).
  42. Moosbauer, J., Binder, M., Schneider, L., Pfisterer, F., Becker, M., Lang, M., Kotthoff, L., & Bischl, B. (2022). Automated Benchmark-Driven Design and Explanation of Hyperparameter Optimizers. IEEE Transactions on Evolutionary Computation, 26(6), 1336–1350.
    link | pdf
  43. Pargent, F., Pfisterer, F., Thomas, J., & Bischl, B. (2022). Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features. Computational Statistics, 1–22.
    link | pdf
  44. Pfisterer, F., Schneider, L., Moosbauer, J., Binder, M., & Bischl, B. (2022). Yahpo Gym – An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization. International Conference on Automated Machine Learning, 3–1.
    link | pdf
  45. Schneider, L., Pfisterer, F., Kent, P., Branke, J., Bischl, B., & Thomas, J. (2022). Tackling Neural Architecture Search With Quality Diversity Optimization. International Conference on Automated Machine Learning, 9–1.
    link | pdf
  46. Schneider, L., Pfisterer, F., Thomas, J., & Bischl, B. (2022). A Collection of Quality Diversity Optimization Problems Derived from Hyperparameter Optimization of Machine Learning Models. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2136–2142.
    link | pdf
  47. Schneider, L., Schäpermeier, L., Prager, R. P., Bischl, B., Trautmann, H., & Kerschke, P. (2022). HPO X ELA: Investigating Hyperparameter Optimization Landscapes by Means of Exploratory Landscape Analysis. Parallel Problem Solving from Nature – PPSN XVII, 575–589.
    link | pdf
  48. Sonabend, R., Bender, A., & Vollmer, S. (2022). Avoiding C-hacking When Evaluating Survival Distribution Predictions with Discrimination Measures. Bioinformatics, 38(17), 4178–4184.
    link|pdf
  49. Turkoglu, M. O., Becker, A., Gündüz, H. A., Rezaei, M., Bischl, B., Daudt, R. C., D’Aronco, S., Wegner, J. D., & Schindler, K. (2022). FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation. Advances in Neural Information Processing Systems (NeurIPS 2022).
    link | pdf
  50. Au, Q., Herbinger, J., Stachl, C., Bischl, B., & Casalicchio, G. (2022). Grouped Feature Importance and Combined Features Effect Plot. Data Mining and Knowledge Discovery, 36(4), 1401–1450.
    link | pdf
  51. Bothmann, L., Strickroth, S., Casalicchio, G., Rügamer, D., Lindauer, M., Scheipl, F., & Bischl, B. (2022). Developing Open Source Educational Resources for Machine Learning and Data Science. In K. M. Kinnaird, P. Steinbach, & O. Guhr (Eds.), Proceedings of the Third Teaching Machine Learning and Artificial Intelligence Workshop (Vol. 207, pp. 1–6). PMLR.
    link | pdf
  52. Herbinger, J., Bischl, B., & Casalicchio, G. (2022). REPID: Regional Effect Plots with implicit Interaction Detection. International Conference on Artificial Intelligence and Statistics (AISTATS), 25.
    link | pdf
  53. Moosbauer, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2022). Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution. ArXiv:2111.14756 [Cs.LG].
    link | pdf
  54. Nießl, C., Herrmann, M., Wiedemann, C., Casalicchio, G., & Boulesteix, A.-L. (2022). Over-optimism in benchmark studies and the multiplicity of design and analysis options when interpreting their results. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(2), e1441.
    link | pdf
  55. Scholbeck, C. A., Casalicchio, G., Molnar, C., Bischl, B., & Heumann, C. (2022). Marginal Effects for Non-Linear Prediction Functions. Data Mining and Knowledge Discovery (to appear).
    link | pdf
  56. Molnar, C., König, G., Herbinger, J., Freiesleben, T., Dandl, S., Scholbeck, C. A., Casalicchio, G., Grosse-Wentrup, M., & Bischl, B. (2022). General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models. In A. Holzinger, R. Goebel, R. Fong, T. Moon, K.-R. Müller, & W. Samek (Eds.), xxAI - Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers (pp. 39–68). Springer International Publishing. https://doi.org/10.1007/978-3-031-04083-2_4
  57. Goschenhofer, J., Ragupathy, P., Heumann, C., Bischl, B., & Aßenmacher, M. (2022, December 7). CC-Top: Constrained Clustering for Dynamic Topic Discovery. Workshop on Ever Evolving NLP (EvoNLP).
    link|pdf
  58. Dexl, J., Benz, M., Kuritcyn, P., Wittenberg, T., Bruns, V., Geppert, C., Hartmann, A., Bischl, B., & Goschenhofer, J. (2022). Robust Colon Tissue Cartography with Semi-Supervision. Current Directions in Biomedical Engineering, 8(2), 344–347.
    link|pdf
  59. Rueger, S., Goschenhofer, J., Nath, A., Firsching, M., Ennen, A., & Bischl, B. (2022). Deep-Learning-based Aluminum Sorting on Dual Energy X-Ray Transmission Data. Sensor-Based Sorting and Control. https://doi.org/10.2370/9783844085457

2021

  1. Hilbert, S., Coors, S., Kraus, E., Bischl, B., Lindl, A., Frei, M., Wild, J., Krauss, S., Goretzko, D., & Stachl, C. (2021). Machine learning for the educational sciences. Review of Education, 9(3), e3310. https://doi.org/10.1002/rev3.3310
  2. Liew, B. X. W., Rügamer, D., Duffy, K., Taylor, M., & Jackson, J. (2021). The mechanical energetics of walking across the adult lifespan. PloS One, 16(11), e0259817.
    link
  3. Mittermeier, M., Weigert, M., & Rügamer, D. (2021). Identifying the atmospheric drivers of drought and heat using a smoothed deep learning approach. NeurIPS 2021, Tackling Climate Change with Machine Learning.
    link|pdf
  4. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2021). Towards modelling hazard factors in unstructured data spaces using gradient-based latent interpolation. NeurIPS 2021 Workshops, Deep Generative Models and Downstream Applications.
    link|pdf
  5. Weber, T., Ingrisch, M., Fabritius, M., Bischl, B., & Rügamer, D. (2021). Survival-oriented embeddings for improving accessibility to complex data structures. NeurIPS 2021 Workshops, Bridging the Gap: From Machine Learning Research to Clinical Practice.
    link|pdf
  6. Liew, B. X. W., Rügamer, D., Zhai, X. J., Morris, S., & Netto, K. (2021). Comparing machine, deep, and transfer learning in predicting joint moments in running. Journal of Biomechanics.
  7. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2021, October 3). Joint Classification and Trajectory Regression of Online Handwriting using a Multi-Task Learning Approach. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
  8. Goschenhofer, J., Hvingelby, R., Rügamer, D., Thomas, J., Wagner, M., & Bischl, B. (2021, September 18). Deep Semi-Supervised Learning for Time Series Classification. 20th IEEE International Conference on Machine Learning and Applications (ICMLA).
    link | pdf
  9. Python, A., Bender, A., Blangiardo, M., Illian, J. B., Lin, Y., Liu, B., Lucas, T. C. D., Tan, S., Wen, Y., Svanidze, D., & Yin, J. (2021). A Downscaling Approach to Compare COVID-19 Count Data from Databases Aggregated at Different Spatial Scales. Journal of the Royal Statistical Society: Series A (Statistics in Society). https://doi.org/10.1111/rssa.12738
  10. Bauer, A., Klima, A., Gauß, J., Kümpel, H., Bender, A., & Küchenhoff, H. (2021). Mundus Vult Decipi, Ergo Decipiatur: Visual Communication of Uncertainty in Election Polls. PS: Political Science & Politics, 1–7. https://doi.org/10.1017/S1049096521000950
  11. Rezaei, M., Soleymani, F., Bischl, B., & Azizi, S. (2021). Deep Bregman Divergence for Contrastive Learning of Visual Representations. ArXiv Preprint ArXiv:2109.07455.
  12. Soleymani, F., Eslami, M., Elze, T., Bischl, B., & Rezaei, M. (2021). Deep Variational Clustering Framework for Self-labeling of Large-scale Medical Images. ArXiv Preprint ArXiv:2109.10777.
  13. Fabritius, M. P., Seidensticker, M., Rueckel, J., Heinze, C., Pech, M., Paprottka, K. J., Paprottka, P. M., Topalis, J., Bender, A., Ricke, J., Mittermeier, A., & Ingrisch, M. (2021). Bi-Centric Independent Validation of Outcome Prediction after Radioembolization of Primary and Secondary Liver Cancer. Journal of Clinical Medicine, 10(16), 3668. https://doi.org/10.3390/jcm10163668
  14. Pfisterer, F., Kern, C., Dandl, S., Sun, M., Kim, M. P., & Bischl, B. (2021). mcboost: Multi-Calibration Boosting for R. Journal of Open Source Software, 6(64), 3453. https://doi.org/10.21105/joss.03453
  15. Falla, D., Devecchi, V., Jimenez-Grande, D., Rügamer, D., & Liew, B. (2021). Modern Machine Learning Approaches Applied in Spinal Pain Research. Journal of Electromyography and Kinesiology.
  16. *Coors, S., *Schalk, D., Bischl, B., & Rügamer, D. (2021). Automatic Componentwise Boosting: An Interpretable AutoML System. ECML-PKDD Workshop on Automating Data Science.
    link | pdf
  17. Berninger, C., Stöcker, A., & Rügamer, D. (2021). A Bayesian Time-Varying Autoregressive Model for Improved Short- and Long-Term Prediction. Journal of Forecasting.
    link|pdf
  18. Python, A., Bender, A., Nandi, A. K., Hancock, P. A., Arambepola, R., Brandsch, J., & Lucas, T. C. D. (2021). Predicting non-state terrorism worldwide. Science Advances, 7(31), eabg4778. https://doi.org/10.1126/sciadv.abg4778
  19. Baumann, P. F. M., Hothorn, T., & Rügamer, D. (2021). Deep Conditional Transformation Models. Machine Learning and Knowledge Discovery in Databases. Research Track, 3–18.
    link|pdf
  20. Gijsbers, P., Pfisterer, F., van Rijn, J. N., Bischl, B., & Vanschoren, J. (2021). Meta-Learning for Symbolic Hyperparameter Defaults. In 2021 Genetic and Evolutionary Computation Conference Companion (GECCO ’21 Companion). ACM. https://doi.org/10.1145/3449726.3459532
  21. Pfisterer, F., van Rijn, J. N., Probst, P., Müller, A., & Bischl, B. (2021). Learning Multiple Defaults for Machine Learning Algorithms. 2021 Genetic and Evolutionary Computation Conference Companion (GECCO ’21 Companion). https://doi.org/10.1145/3449726.3459532
  22. Rath, K., Albert, C. G., Bischl, B., & von Toussaint, U. (2021). Symplectic Gaussian process regression of maps in Hamiltonian systems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(5), 053121. https://doi.org/10.1063/5.0048129
  23. König, G., Molnar, C., Bischl, B., & Grosse-Wentrup, M. (2021). Relative Feature Importance. 2020 25th International Conference on Pattern Recognition (ICPR), 9318–9325.
    link | pdf
  24. Gerostathopoulos, I., Plášil, F., Prehofer, C., Thomas, J., & Bischl, B. (2021). Automated Online Experiment-Driven Adaptation–Mechanics and Cost Aspects. IEEE Access, 9, 58079–58087.
    link | pdf
  25. Liew, B., Lee, H. Y., Rügamer, D., Nunzio, A. M. D., Heneghan, N. R., Falla, D., & Evans, D. W. (2021). A novel metric of reliability in pressure pain threshold measurement. Scientific Reports (Nature).
  26. Küchenhoff, H., Günther, F., Höhle, M., & Bender, A. (2021). Analysis of the early COVID-19 epidemic curve in Germany by regression models with change points. Epidemiology & Infection, 1–17. https://doi.org/10.1017/S0950268821000558
  27. Bender, A., Rügamer, D., Scheipl, F., & Bischl, B. (2021). A General Machine Learning Framework for Survival Analysis. In F. Hutter, K. Kersting, J. Lijffijt, & I. Valera (Eds.), Machine Learning and Knowledge Discovery in Databases (pp. 158–173). Springer International Publishing. https://doi.org/10.1007/978-3-030-67664-3_10
  28. Sonabend, R., Király, F. J., Bender, A., Bischl, B., & Lang, M. (2021). mlr3proba: An R Package for Machine Learning in Survival Analysis. Bioinformatics, btab039. https://doi.org/10.1093/bioinformatics/btab039
  29. Agrawal, A., Pfisterer, F., Bischl, B., Chen, J., Sood, S., Shah, S., Buet-Golfouse, F., Mateen, B. A., & Vollmer, S. J. (2021). Debiasing classifiers: is reality at variance with expectation? Available at SSRN 3711681.
    link
  30. Liew, B. X. W., Rügamer, D., De Nunzio, A., & Falla, D. (2021). Harnessing time-series kinematic and electromyography signals as predictors to discriminate amongst low back pain recovery status. Brain and Spine, 1, 100236.
  31. Kaminwar, S. R., Goschenhofer, J., Thomas, J., Thon, I., & Bischl, B. (2021). Structured Verification of Machine Learning Models in Industrial Settings. Big Data.
    link
  32. Kopper, P., Pölsterl, S., Wachinger, C., Bischl, B., Bender, A., & Rügamer, D. (2021). Semi-Structured Deep Piecewise Exponential Models. In R. Greiner, N. Kumar, T. A. Gerds, & M. van der Schaar (Eds.), Proceedings of AAAI Spring Symposium on Survival Prediction - Algorithms, Challenges, and Applications 2021 (Vol. 146, pp. 40–53). PMLR.
    link|pdf
  33. Becker, M., Binder, M., Bischl, B., Lang, M., Pfisterer, F., Reich, N. G., Richter, J., Schratz, P., & Sonabend, R. (2021). mlr3 book.
    link
  34. Binder, M., Pfisterer, F., Lang, M., Schneider, L., Kotthoff, L., & Bischl, B. (2021). mlr3pipelines - Flexible Machine Learning Pipelines in R. Journal of Machine Learning Research, 22(184), 1–7.
    link | pdf
  35. Schneider, L., Pfisterer, F., Binder, M., & Bischl, B. (2021). Mutation is All You Need. 8th ICML Workshop on Automated Machine Learning.
    pdf
  36. Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., & Vanschoren, J. (2021). OpenML Benchmarking Suites. In J. Vanschoren & S. Yeung (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (Vol. 1).
    link | pdf
  37. Moosbauer, J., Herbinger, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2021). Explaining Hyperparameter Optimization via Partial Dependence Plots. Advances in Neural Information Processing Systems (NeurIPS 2021), 34.
    link | pdf
  38. Moosbauer, J., Herbinger, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2021). Towards Explaining Hyperparameter Optimization via Partial Dependence Plots. 8th ICML Workshop on Automated Machine Learning (AutoML).
    link | pdf
  39. König, G., Freiesleben, T., Bischl, B., Casalicchio, G., & Grosse-Wentrup, M. (2021). Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT). ArXiv Preprint ArXiv:2106.08086.
    link | pdf

2020

  1. Günther, F., Bender, A., Katz, K., Küchenhoff, H., & Höhle, M. (2020). Nowcasting the COVID-19 pandemic in Bavaria. Biometrical Journal.
    link|pdf
  2. Liew, B. X. W., Peolsson, A., Rügamer, D., Wibault, J., Löfgren, H., Dedering, A., Zsigmond, P., & Falla, D. (2020). Clinical predictive modelling of post-surgical recovery in individuals with cervical radiculopathy – a machine learning approach. Scientific Reports.
    link
  3. Rügamer, D., Pfisterer, F., & Bischl, B. (2020). Neural Mixture Distributional Regression. ArXiv:2010.06889 [Cs, Stat].
    link|pdf
  4. Guenther, F., Bender, A., Höhle, M., Wildner, M., & Küchenhoff, H. (2020). Analysis of the COVID-19 pandemic in Bavaria: adjusting for misclassification. MedRxiv, 2020.09.29.20203877. https://doi.org/10.1101/2020.09.29.20203877
  5. Dandl, S., Molnar, C., Binder, M., & Bischl, B. (2020). Multi-Objective Counterfactual Explanations. In T. Bäck, M. Preuss, A. Deutz, H. Wang, C. Doerr, M. Emmerich, & H. Trautmann (Eds.), Parallel Problem Solving from Nature – PPSN XVI (pp. 448–469). Springer International Publishing.
    link
  6. Schratz, P., Muenchow, J., Iturritxa, E., Cortés, J., Bischl, B., & Brenning, A. (2020). Monitoring forest health using hyperspectral imagery: Does feature selection improve the performance of machine-learning techniques?
    link
  7. Bender, A., Python, A., Lindsay, S. W., Golding, N., & Moyes, C. L. (2020). Modelling geospatial distributions of the triatomine vectors of Trypanosoma cruzi in Latin America. PLOS Neglected Tropical Diseases, 14(8), e0008411. https://doi.org/10.1371/journal.pntd.0008411
  8. Binder, M., Pfisterer, F., & Bischl, B. (2020, July 18). Collecting Empirical Data About Hyperparameters for Data Driven AutoML. Proceedings of the 7th ICML Workshop on Automated Machine Learning (AutoML 2020).
    pdf
  9. Binder, M., Moosbauer, J., Thomas, J., & Bischl, B. (2020). Multi-Objective Hyperparameter Tuning and Feature Selection Using Filter Ensembles. Proceedings of the 2020 Genetic and Evolutionary Computation Conference, 471–479. https://doi.org/10.1145/3377930.3389815
  10. Beggel, L., Pfeiffer, M., & Bischl, B. (2020). Robust Anomaly Detection in Images using Adversarial Autoencoders. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 206–222.
    link | pdf
  11. Dorigatti, E., & Schubert, B. (2020). Joint epitope selection and spacer design for string-of-beads vaccines. BioRxiv. https://doi.org/10.1101/2020.04.25.060988
  12. Pfister, F. M. J., Um, T. T., Pichler, D. C., Goschenhofer, J., Abedinpour, K., Lang, M., Endo, S., Ceballos-Baumann, A. O., Hirche, S., Bischl, B., & others. (2020). High-Resolution Motor State Detection in Parkinson’s Disease Using Convolutional Neural Networks. Scientific Reports, 10(1), 1–11.
    link
  13. Scholbeck, C. A., Molnar, C., Heumann, C., Bischl, B., & Casalicchio, G. (2020). Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations. In P. Cellier & K. Driessens (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019 (pp. 205–216). Springer International Publishing.
    link | pdf
  14. Liew, B. X. W., Rügamer, D., Stöcker, A., & De Nunzio, A. M. (2020). Classifying neck pain status using scalar and functional biomechanical variables – development of a method using functional data boosting. Gait & Posture, 75, 146–150.
    link
  15. Liew, B. X. W., Rügamer, D., Abichandani, D., & De Nunzio, A. M. (2020). Classifying individuals with and without patellofemoral pain syndrome using ground force profiles – Development of a method using functional data boosting. Gait & Posture, 80, 90–95.
    link
  16. Bommert, A., Sun, X., Bischl, B., Rahnenführer, J., & Lang, M. (2020). Benchmark for filter methods for feature selection in high-dimensional classification data. Computational Statistics & Data Analysis, 143, 106839.
    link | pdf
  17. Brockhaus, S., Rügamer, D., & Greven, S. (2020). Boosting Functional Regression Models with FDboost. Journal of Statistical Software, 94(10), 1–50.
  18. Ellenbach, N., Boulesteix, A.-L., Bischl, B., Unger, K., & Hornung, R. (2020). Improved Outcome Prediction Across Data Sources Through Robust Parameter Tuning. Journal of Classification, 1–20.
    link|pdf
  19. Dorigatti, E., & Schubert, B. (2020). Graph-theoretical formulation of the generalized epitope-based vaccine design problem. PLOS Computational Biology, 16(10), e1008237. https://doi.org/10.1371/journal.pcbi.1008237
  20. Goerigk, S., Hilbert, S., Jobst, A., Falkai, P., Bühner, M., Stachl, C., Bischl, B., Coors, S., Ehring, T., Padberg, F., & Sarubin, N. (2020). Predicting instructed simulation and dissimulation when screening for depressive symptoms. European Archives of Psychiatry and Clinical Neuroscience, 270(2), 153–168.
    link
  21. Liew, B., Rügamer, D., De Nunzio, A., & Falla, D. (2020). Interpretable machine learning models for classifying low back pain status using functional physiological variables. European Spine Journal, 29, 1845–1859.
    link
  22. Rügamer, D., & Greven, S. (2020). Inference for L2-Boosting. Statistics and Computing, 30, 279–289.
    link|pdf
  23. Stachl, C., Au, Q., Schoedel, R., Gosling, S. D., Harari, G. M., Buschek, D., Völkel, S. T., Schuwerk, T., Oldemeier, M., Ullmann, T., & others. (2020). Predicting personality from patterns of behavior collected with smartphones. Proceedings of the National Academy of Sciences.
    link | pdf
  24. Sun, X., Bommert, A., Pfisterer, F., Rahnenführer, J., Lang, M., & Bischl, B. (2020). High Dimensional Restrictive Federated Model Selection with Multi-objective Bayesian Optimization over Shifted Distributions. In Y. Bi, R. Bhatia, & S. Kapoor (Eds.), Intelligent Systems and Applications (pp. 629–647). Springer International Publishing. https://doi.org/10.1007/978-3-030-29516-5_48
  25. Molnar, C., Casalicchio, G., & Bischl, B. (2020). Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability. In P. Cellier & K. Driessens (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019 (pp. 193–204). Springer International Publishing.
    link | pdf
  26. Molnar, C., Casalicchio, G., & Bischl, B. (2020). Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges. In I. Koprinska, M. Kamp, A. Appice, C. Loglisci, L. Antonie, A. Zimmermann, R. Guidotti, Ö. Özgöbek, R. P. Ribeiro, R. Gavaldà, J. Gama, L. Adilova, Y. Krishnamurthy, P. M. Ferreira, D. Malerba, I. Medeiros, M. Ceci, G. Manco, E. Masciari, … J. A. Gulla (Eds.), ECML PKDD 2020 Workshops (pp. 417–431). Springer International Publishing.
    link | pdf
  27. Molnar, C., König, G., Herbinger, J., Freiesleben, T., Dandl, S., Scholbeck, C. A., Casalicchio, G., Grosse-Wentrup, M., & Bischl, B. (2020). Pitfalls to Avoid when Interpreting Machine Learning Models. ICML Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.
    link | pdf

2019

  1. Sun, X., & Bischl, B. (2019, December 6). Tutorial and Survey on Probabilistic Graphical Model and Variational Inference in Deep Reinforcement Learning. 2019 IEEE Symposium Series on Computational Intelligence (SSCI).
    link|pdf
  2. Pfisterer, F., Beggel, L., Sun, X., Scheipl, F., & Bischl, B. (2019). Benchmarking time series classification – Functional data vs machine learning approaches. In arXiv preprint arXiv:1911.07511.
    link | pdf
  3. Pfisterer, F., Thomas, J., & Bischl, B. (2019). Towards Human Centered AutoML. In arXiv preprint arXiv:1911.02391.
    link | pdf
  4. Beggel, L., Kausler, B. X., Schiegg, M., Pfeiffer, M., & Bischl, B. (2019). Time series anomaly detection based on shapelet learning. Computational Statistics, 34(3), 945–976.
    link | pdf
  5. Schmid, M., Bischl, B., & Kestler, H. A. (2019). Proceedings of Reisensburg 2016–2017. Springer.
    link
  6. Pfisterer, F., Coors, S., Thomas, J., & Bischl, B. (2019). Multi-Objective Automatic Machine Learning with AutoxgboostMC. In arXiv preprint arXiv:1908.10796.
    link | pdf
  7. Sun, X., Lin, J., & Bischl, B. (2019). ReinBo: Machine Learning pipeline search and configuration with Bayesian Optimization embedded Reinforcement Learning. CoRR.
    link | pdf
  8. Au, Q., Schalk, D., Casalicchio, G., Schoedel, R., Stachl, C., & Bischl, B. (2019). Component-Wise Boosting of Targets for Multi-Output Prediction. ArXiv Preprint ArXiv:1904.03943.
    link | pdf
  9. Probst, P., Boulesteix, A.-L., & Bischl, B. (2019). Tunability: Importance of Hyperparameters of Machine Learning Algorithms. Journal of Machine Learning Research, 20(53), 1–32.
    link | pdf
  10. Casalicchio, G., Molnar, C., & Bischl, B. (2019). Visualizing the Feature Importance for Black Box Models. In M. Berlingerio, F. Bonchi, T. Gärtner, N. Hurley, & G. Ifrim (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018 (pp. 655–670). Springer International Publishing.
    link | pdf
  11. Stachl, C., Au, Q., Schoedel, R., Buschek, D., Völkel, S., Schuwerk, T., Oldemeier, M., Ullmann, T., Hussmann, H., Bischl, B., & Bühner, M. (2019). Behavioral Patterns in Smartphone Usage Predict Big Five Personality Traits. https://doi.org/10.31234/osf.io/ks4vd
  12. Goschenhofer, J., Pfister, F. M. J., Yuksel, K. A., Bischl, B., Fietzek, U., & Thomas, J. (2019). Wearable-based Parkinson’s Disease Severity Monitoring using Deep Learning. Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2019, 400–415.
    link | pdf
  13. König, G., & Grosse-Wentrup, M. (2019). A Causal Perspective on Challenges for AI in Precision Medicine.
    link
  14. Schüller, N., Boulesteix, A.-L., Bischl, B., Unger, K., & Hornung, R. (2019). Improved outcome prediction across data sources through robust parameter tuning (Vol. 221).
    link | pdf
  15. Pfister, F. M. J., von Schumann, A., Bemetz, J., Thomas, J., Ceballos-Baumann, A., Bischl, B., & Fietzek, U. (2019). Recognition of subjects with early-stage Parkinson from free-living unilateral wrist-sensor data using a hierarchical machine learning model. Journal of Neural Transmission, 126(5), 663–663.
  16. Gijsbers, P., LeDell, E., Thomas, J., Poirier, S., Bischl, B., & Vanschoren, J. (2019). An Open Source AutoML Benchmark. CoRR.
    link | pdf
  17. Sun, X., Gossmann, A., Wang, Y., & Bischl, B. (2019). Variational Resampling Based Assessment of Deep Neural Networks under Distribution Shift. 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 1344–1353.
    link|pdf
  18. Schuwerk, T., Kaltefleiter, L. J., Au, J.-Q., Hoesl, A., & Stachl, C. (2019). Enter the Wild: Autistic Traits and Their Relationship to Mentalizing and Social Interaction in Everyday Life. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-019-04134-6
  19. Völkel, S. T., Schödel, R., Buschek, D., Stachl, C., Au, Q., Bischl, B., Bühner, M., & Hussmann, H. (2019). Opportunities and challenges of utilizing personality traits for personalization in HCI. Personalized Human-Computer Interaction, 31–65.
    link
  20. Sun, X., Wang, Y., Gossmann, A., & Bischl, B. (2019). Resampling-based Assessment of Robustness to Distribution Shift for Deep Neural Networks. CoRR.
    link | pdf
  21. Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., & Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44), 1903.
    link | pdf

2018

  1. van Rijn, J. N., Pfisterer, F., Thomas, J., Bischl, B., & Vanschoren, J. (2018, December 8). Meta Learning for Defaults–Symbolic Defaults. NeurIPS 2018 Workshop on Meta Learning.
    link | pdf
  2. Arenas, D., Barp, E., Bohner, G., Churvay, V., Kiraly, F., Lienart, T., Vollmer, S., Innes, M., & Bischl, B. (2018). Workshop contribution MLJ.
    pdf
  3. Molnar, C., Casalicchio, G., & Bischl, B. (2018). iml: An R package for Interpretable Machine Learning. The Journal of Open Source Software, 3, 786.
    link | pdf
  4. Kestler, H. A., Bischl, B., & Schmid, M. (2018). Proceedings of Reisensburg 2014–2015. Springer.
    link
  5. Bender, A., & Scheipl, F. (2018). pammtools: Piece-wise exponential Additive Mixed Modeling tools. ArXiv:1806.01042 [Stat].
    link | pdf
  6. Fossati, M., Dorigatti, E., & Giuliano, C. (2018). N-ary relation extraction for simultaneous T-Box and A-Box knowledge base augmentation. Semantic Web, 9(4), 413–439. https://doi.org/10.3233/SW-170269
  7. Horn, D., Demircioğlu, A., Bischl, B., Glasmachers, T., & Weihs, C. (2018). A Comparative Study on Large Scale Kernelized Support Vector Machines. Advances in Data Analysis and Classification, 1–17. https://doi.org/10.1007/s11634-016-0265-7
  8. Kühn, D., Probst, P., Thomas, J., & Bischl, B. (2018). Automatic Exploration of Machine Learning Experiments on OpenML. ArXiv Preprint ArXiv:1806.10961.
    link | pdf
  9. Rügamer, D., & Greven, S. (2018). Selective inference after likelihood- or test-based model selection in linear models. Statistics & Probability Letters, 140, 7–12.
  10. Schoedel, R., Au, Q., Völkel, S. T., Lehmann, F., Becker, D., Bühner, M., Bischl, B., Hussmann, H., & Stachl, C. (2018). Digital Footprints of Sensation Seeking. Zeitschrift Für Psychologie, 226(4), 232–245. https://doi.org/10.1027/2151-2604/a000342
  11. Schalk, D., Thomas, J., & Bischl, B. (2018). compboost: Modular Framework for Component-Wise Boosting. Journal of Open Source Software, 3(30), 967.
    link | pdf
  12. Thomas, J., Coors, S., & Bischl, B. (2018). Automatic Gradient Boosting. ICML AutoML Workshop.
    link | pdf
  13. Thomas, J., Mayr, A., Bischl, B., Schmid, M., Smith, A., & Hofner, B. (2018). Gradient boosting for distributional regression: faster tuning and improved variable selection via noncyclical updates. Statistics and Computing, 28(3), 673–687.
    link | pdf
  14. Völkel, S. T., Graefe, J., Schödel, R., Häuslschmid, R., Stachl, C., Au, Q., & Hussmann, H. (2018). I Drive My Car and My States Drive Me. Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI ’18, 198–203. https://doi.org/10.1145/3239092.3267102
  15. Burdukiewicz, M., Karas, M., Jessen, L. E., Kosinski, M., Bischl, B., & Rödiger, S. (2018). Conference Report: Why R? 2018. The R Journal, 10(2), 572–578.
    pdf

2017

  1. Stachl, C., Hilbert, S., Au, Q., Buschek, D., De Luca, A., Bischl, B., Hussmann, H., & Bühner, M. (2017). Personality Traits Predict Smartphone Usage. European Journal of Personality, 31(6), 701–722. https://doi.org/10.1002/per.2113
  2. Cáceres, L. P., Bischl, B., & Stützle, T. (2017). Evaluating Random Forest Models for irace. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 1146–1153.
    link|pdf
  3. Casalicchio, G., Lesaffre, E., Küchenhoff, H., & Bruyneel, L. (2017). Nonlinear Analysis to Detect if Excellent Nursing Work Environments Have Highest Well-Being. Journal of Nursing Scholarship, 49(5), 537–547.
    link | pdf
  4. Bischl, B., Richter, J., Bossek, J., Horn, D., Thomas, J., & Lang, M. (2017). mlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions. ArXiv Preprint ArXiv:1703.03373.
    link | pdf
  5. Horn, D., Dagge, M., Sun, X., & Bischl, B. (2017). First Investigations on Noisy Model-Based Multi-objective Optimization. In Evolutionary Multi-Criterion Optimization: 9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017, Proceedings (pp. 298–313). Springer International Publishing. https://doi.org/10.1007/978-3-319-54157-0_21
  6. Beggel, L., Sun, X., & Bischl, B. (2017). mlrFDA: an R toolbox for functional data analysis. Ulmer Informatik-Berichte, 15.
    pdf
  7. Horn, D., Bischl, B., Demircioglu, A., Glasmachers, T., Wagner, T., & Weihs, C. (2017). Multi-objective selection of algorithm portfolios. Archives of Data Science.
    link
  8. Thomas, J., Hepp, T., Mayr, A., & Bischl, B. (2017). Probing for sparse and fast variable selection with model-based boosting. Computational and Mathematical Methods in Medicine, 2017.
    link | pdf
  9. Kotthaus, H., Richter, J., Lang, A., Thomas, J., Bischl, B., Marwedel, P., Rahnenführer, J., & Lang, M. (2017). RAMBO: Resource-Aware Model-Based Optimization with Scheduling for Heterogeneous Runtimes and a Comparison with Asynchronous Model-Based Optimization. International Conference on Learning and Intelligent Optimization, 180–195.
    link | pdf
  10. Lang, M., Bischl, B., & Surmann, D. (2017). batchtools: Tools for R to work on batch systems. The Journal of Open Source Software, 2(10).
    link
  11. Probst, P., Au, Q., Casalicchio, G., Stachl, C., & Bischl, B. (2017). Multilabel Classification with R Package mlr. The R Journal, 9(1), 352–369.
    link | pdf
  12. Casalicchio, G., Bossek, J., Lang, M., Kirchhoff, D., Kerschke, P., Hofner, B., Seibold, H., Vanschoren, J., & Bischl, B. (2017). OpenML: An R package to connect to the machine learning platform OpenML. Computational Statistics, 977–991.
    link | pdf

2016

  1. Horn, D., & Bischl, B. (2016). Multi-objective Parameter Configuration of Machine Learning Algorithms using Model-Based Optimization. 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 1–8.
    link | pdf
  2. Bischl, B., Lang, M., Kotthoff, L., Schiffner, J., Richter, J., Studerus, E., Casalicchio, G., & Jones, Z. M. (2016). mlr: Machine Learning in R. The Journal of Machine Learning Research, 17(170), 1–5.
    link | pdf
  3. Bauer, N., Friedrichs, K., Bischl, B., & Weihs, C. (2016, August 4). Fast Model Based Optimization of Tone Onset Detection by Instance Sampling. Data Analysis, Machine Learning and Knowledge Discovery.
    link
  4. Weihs, C., Horn, D., & Bischl, B. (2016). Big data Classification: Aspects on Many Features and Many Observations. In A. F. X. Wilhelm & H. A. Kestler (Eds.), Analysis of Large and Complex Data (pp. 113–122). Springer International Publishing. https://doi.org/10.1007/978-3-319-25226-1_10
  5. Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Frechétte, A., Hoos, H., Hutter, F., Leyton-Brown, K., Tierney, K., & Vanschoren, J. (2016). ASlib: A Benchmark Library for Algorithm Selection. Artificial Intelligence, 237, 41–58.
    link
  6. Bischl, B., Kühn, T., & Szepannek, G. (2016). On Class Imbalance Correction for Classification Algorithms in Credit Scoring. In M. Lübbecke, A. Koster, P. Letmathe, R. Madlener, B. Peis, & G. Walther (Eds.), Operations Research Proceedings 2014 (pp. 37–43). Springer International Publishing.
    link | pdf
  7. Demircioglu, A., Horn, D., Glasmachers, T., Bischl, B., & Weihs, C. (2016). Fast model selection by limiting SVM training times (arXiv:1602.03368v1). arxiv.org.
    link
  8. Beggel, L., Kausler, B. X., Schiegg, M., & Bischl, B. (2016). Anomaly Detection with Shapelet-Based Feature Learning for Time Series. Ulmer Informatik-Berichte, 25.
    link | pdf
  9. Degroote, H., Bischl, B., Kotthoff, L., & De Causmaecker, P. (2016). Reinforcement Learning for Automatic Online Algorithm Selection - an Empirical Study. Proceedings of the 16th ITAT Conference Information Technologies - Applications and Theory, Tatranské Matliare, Slovakia, September 15-19, 2016, 1649, 93–101.
    link
  10. Feilke, M., Bischl, B., Schmid, V. J., & Gertheiss, J. (2016). Boosting in non-linear regression models with an application to DCE-MRI data. Methods of Information in Medicine, 55(1), 31–41.
    link | pdf
  11. Schiffner, J., Bischl, B., Lang, M., Richter, J., Jones, Z. M., Probst, P., Pfisterer, F., Gallo, M., Kirchhoff, D., Kühn, T., Thomas, J., & Kotthoff, L. (2016). mlr Tutorial.
    link | pdf
  12. Rietzler, M., Geiselhart, F., Thomas, J., & Rukzio, E. (2016). FusionKit: a generic toolkit for skeleton, marker and rigid-body tracking. Proceedings of the 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 73–84.
    link
  13. Richter, J., Kotthaus, H., Bischl, B., Marwedel, P., Rahnenführer, J., & Lang, M. (2016, May 29). Faster Model-Based Optimization through Resource-Aware Scheduling Strategies. Proceedings of the 10th Learning and Intelligent OptimizatioN Conference (LION 10).
    link | pdf

2015

  1. Casalicchio, G., Bischl, B., Boulesteix, A.-L., & Schmid, M. (2015). The residual-based predictiveness curve: A visual tool to assess the performance of prediction models. Biometrics, 72(2), 392–401.
    link | pdf
  2. Vanschoren, J., van Rijn, J. N., & Bischl, B. (2015). Taking machine learning research online with OpenML. In W. Fan, A. Bifet, Q. Yang, & P. S. Yu (Eds.), Proceedings of the 4th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications (Vol. 41, pp. 1–4). PMLR.
    link | pdf
  3. Mantovani, R. G., Rossi, A. L. D., Vanschoren, J., Bischl, B., & Carvalho, A. C. P. L. F. (2015). To tune or not to tune: Recommending when to adjust SVM hyper-parameters via meta-learning. 2015 International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN.2015.7280644
  4. Bossek, J., Bischl, B., Wagner, T., & Rudolph, G. (2015). Learning feature-parameter mappings for parameter tuning via the profile expected improvement. Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, 1319–1326.
    link | pdf
  5. Brockhoff, D., Bischl, B., & Wagner, T. (2015). The Impact of Initial Designs on the Performance of MATSuMoTo on the Noiseless BBOB-2015 Testbed: A Preliminary Study. Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, 1159–1166. https://doi.org/10.1145/2739482.2768470
  6. Horn, D., Wagner, T., Biermann, D., Weihs, C., & Bischl, B. (2015). Model-Based Multi-Objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark. In A. Gaspar-Cunha, C. Henggeler Antunes, & C. C. Coello (Eds.), Evolutionary Multi-Criterion Optimization (EMO) (Vol. 9018, pp. 64–78). Springer. https://doi.org/10.1007/978-3-319-15934-8_5
  7. Casalicchio, G., Tutz, G., & Schauberger, G. (2015). Subject-specific Bradley–Terry–Luce models with implicit variable selection. Statistical Modelling, 15(6), 526–547.
    link | pdf
  8. Kotthaus, H., Korb, I., Lang, M., Bischl, B., Rahnenführer, J., & Marwedel, P. (2015). Runtime and memory consumption analyses for machine learning R programs. Journal of Statistical Computation and Simulation, 85(1), 14–29. https://doi.org/10.1080/00949655.2014.925192
  9. Lang, M., Kotthaus, H., Marwedel, P., Weihs, C., Rahnenführer, J., & Bischl, B. (2015). Automatic model selection for high-dimensional survival analysis. Journal of Statistical Computation and Simulation, 85(1), 62–76. https://doi.org/10.1080/00949655.2014.929131
  10. Bischl, B., Lang, M., Mersmann, O., Rahnenführer, J., & Weihs, C. (2015). BatchJobs and BatchExperiments: Abstraction Mechanisms for Using R in Batch Environments. Journal of Statistical Software, 64(11), 1–25.
    link
  11. Bischl, B. (2015). Applying Model-Based Optimization to Hyperparameter Optimization in Machine Learning. Proceedings of the 2015 International Conference on Meta-Learning and Algorithm Selection - Volume 1455, 1.
    link | pdf
  12. Mersmann, O., Preuss, M., Trautmann, H., Bischl, B., & Weihs, C. (2015). Analyzing the BBOB Results by Means of Benchmarking Concepts. Evolutionary Computation Journal, 23(1), 161–185. https://doi.org/10.1162/EVCO_a_00134
  13. Vanschoren, J., van Rijn, J. N., Bischl, B., Casalicchio, G., & Feurer, M. (2015). OpenML: A Networked Science Platform for Machine Learning. 2015 ICML Workshop on Machine Learning Open Source Software (MLOSS 2015), 1–3.
    link | pdf
  14. Vanschoren, J., Bischl, B., Hutter, F., Sebag, M., Kegl, B., Schmid, M., Napolitano, G., & Wolstencroft, K. (2015). Towards a data science collaboratory. Lecture Notes in Computer Science (IDA 2015), 9385.
    pdf

2014

  1. Bischl, B., Schiffner, J., & Weihs, C. (2014). Benchmarking Classification Algorithms on High-Performance Computing Clusters. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 23–31). Springer. https://doi.org/10.1007/978-3-319-01595-8_3
  2. Bischl, B., Wessing, S., Bauer, N., Friedrichs, K., & Weihs, C. (2014). MOI-MBO: Multiobjective Infill for Parallel Model-Based Optimization. In P. M. Pardalos, M. G. C. Resende, C. Vogiatzis, & J. L. Walteros (Eds.), Learning and Intelligent Optimization (pp. 173–186). Springer. https://doi.org/10.1007/978-3-319-09584-4_17
  3. Kerschke, P., Preuss, M., Hernández, C., Schütze, O., Sun, J.-Q., Grimme, C., Rudolph, G., Bischl, B., & Trautmann, H. (2014). Cell Mapping Techniques for Exploratory Landscape Analysis. Proceedings of the EVOLVE 2014: A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation, 115–131.
    link | pdf
  4. Meyer, O., Bischl, B., & Weihs, C. (2014). Support Vector Machines on Large Data Sets: Simple Parallel Approaches. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 87–95). Springer. https://doi.org/10.1007/978-3-319-01595-8_10
  5. Vanschoren, J., van Rijn, J. N., Bischl, B., & Torgo, L. (2014). OpenML: Networked Science in Machine Learning. SIGKDD Explorations Newsletter, 15(2), 49–60.
    link | pdf
  6. Vatolkin, I., Bischl, B., Rudolph, G., & Weihs, C. (2014). Statistical Comparison of Classifiers for Multi-objective Feature Selection in Instrument Recognition. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 171–178). Springer. https://doi.org/10.1007/978-3-319-01595-8_19

2013

  1. Hess, S., Wagner, T., & Bischl, B. (2013). PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection. In G. Nicosia & P. Pardalos (Eds.), Learning and Intelligent Optimization (pp. 110–124). Springer. https://doi.org/10.1007/978-3-642-44973-4_13
  2. Mersmann, O., Bischl, B., Trautmann, H., Wagner, M., Bossek, J., & Neumann, F. (2013). A novel feature-based approach to characterize algorithm performance for the traveling salesperson problem. Annals of Mathematics and Artificial Intelligence, 69, 151–182. https://doi.org/10.1007/s10472-013-9341-2
  3. van Rijn, J., Bischl, B., Torgo, L., Gao, G., Umaashankar, V., Fischer, S., Winter, P., Wiswedel, B., Berthold, M. R., & Vanschoren, J. (2013). OpenML: A Collaborative Science Platform. Machine Learning and Knowledge Discovery in Databases, 645–649. https://doi.org/10.1007/978-3-642-40994-3_46
  4. Bischl, B., Schiffner, J., & Weihs, C. (2013). Benchmarking local classification methods. Computational Statistics, 28(6), 2599–2619. https://doi.org/10.1007/s00180-013-0420-y
  5. Bergmann, S., Ziegler, N., Bartels, T., Hübel, J., Schumacher, C., Rauch, E., Brandl, S., Bender, A., Casalicchio, G., Krautwald-Junghanns, M.-E., & others. (2013). Prevalence and severity of foot pad alterations in German turkey poults during the early rearing phase. Poultry Science, 92(5), 1171–1176.
    link | pdf
  6. Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., & Trautmann, H. (2013, January 16). A Feature-Based Comparison of Local Search and the Christofides Algorithm for the Travelling Salesperson Problem. Foundations of Genetic Algorithms (FOGA). https://doi.org/10.1145/2460239.2460253
  7. van Rijn, J., Umaashankar, V., Fischer, S., Bischl, B., Torgo, L., Gao, B., Winter, P., Wiswedel, B., Berthold, M. R., & Vanschoren, J. (2013). A RapidMiner extension for Open Machine Learning. RapidMiner Community Meeting and Conference (RCOMM), 59–70.
    link | pdf
  8. Ziegler, N., Bergmann, S., Hübel, J., Bartels, T., Schumacher, C., Bender, A., Casalicchio, G., Küchenhoff, H., Krautwald-Junghanns, M.-E., & Erhard, M. (2013). Climate parameters and the influence on the foot pad health status of fattening turkeys BUT 6 during the early rearing phase. Berliner und Münchener Tierärztliche Wochenschrift, 126(5–6), 181–188.
    link | pdf

2012

  1. Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., & Trautmann, H. (2012). Features of Easy and Hard Instances for Approximation Algorithms and the Traveling Salesperson Problem. Citeseer.
    link | pdf
  2. Bischl, B., Mersmann, O., Trautmann, H., & Preuss, M. (2012). Algorithm Selection Based on Exploratory Landscape Analysis and Cost-Sensitive Learning. Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, 313–320. https://doi.org/10.1145/2330163.2330209
  3. Koch, P., Bischl, B., Flasch, O., Bartz-Beielstein, T., Weihs, C., & Konen, W. (2012). Tuning and evolution of support vector kernels. Evolutionary Intelligence, 5(3), 153–170. https://doi.org/10.1007/s12065-012-0073-8
  4. Mersmann, O., Bischl, B., Bossek, J., Trautmann, H., Wagner, M., & Neumann, F. (2012). Local Search and the Traveling Salesman Problem: A Feature-Based Characterization of Problem Hardness. Learning and Intelligent Optimization Conference (LION), 115–129. https://doi.org/10.1007/978-3-642-34413-8_9
  5. Schiffner, J., Bischl, B., & Weihs, C. (2012). Bias-variance analysis of local classification methods. In W. Gaul, A. Geyer-Schulz, L. Schmidt-Thieme, & J. Kunze (Eds.), Challenges at the Interface of Data Analysis, Computer Science, and Optimization (pp. 49–57). Springer. https://doi.org/10.1007/978-3-642-24466-7_6
  6. Weihs, C., Mersmann, O., Bischl, B., Fritsch, A., Trautmann, H., Karbach, T.-M., & Spaan, B. (2012). A Case Study on the Use of Statistical Classification Methods in Particle Physics. Challenges at the Interface of Data Analysis, Computer Science, and Optimization, 69–77.
    link
  7. Bischl, B., Lang, M., Mersmann, O., Rahnenfuehrer, J., & Weihs, C. (2012). Computing on high performance clusters with R: Packages BatchJobs and BatchExperiments. SFB 876, TU Dortmund University.
    link
  8. Bischl, B., Mersmann, O., Trautmann, H., & Weihs, C. (2012). Resampling Methods for Meta-Model Validation with Recommendations for Evolutionary Computation. Evolutionary Computation, 20(2), 249–275. https://doi.org/10.1162/EVCO_a_00069

2011

  1. Mersmann, O., Bischl, B., Trautmann, H., Preuss, M., Weihs, C., & Rudolph, G. (2011). Exploratory Landscape Analysis. In N. Krasnogor (Ed.), Proceedings of the 13th annual conference on genetic and evolutionary computation (GECCO ’11) (pp. 829–836). Association for Computing Machinery. https://doi.org/10.1145/2001576.2001690
  2. Blume, H., Bischl, B., Botteck, M., Igel, C., Martin, R., Roetter, G., Rudolph, G., Theimer, W., Vatolkin, I., & Weihs, C. (2011). Huge Music Archives on Mobile Devices. IEEE Signal Processing Magazine, 28(4), 24–39. https://doi.org/10.1109/MSP.2011.940880
  3. Koch, P., Bischl, B., Flasch, O., Bartz-Beielstein, T., & Konen, W. (2011). On the Tuning and Evolution of Support Vector Kernels. Research Center CIOP (Computational Intelligence, Optimization and Data Mining).
    link
  4. Weihs, C., Friedrichs, K., & Bischl, B. (2011). Statistics for hearing aids: Auralization. Second Bilateral German-Polish Symposium on Data Analysis and Its Applications (GPSDAA).

2010

  1. Bischl, B., Vatolkin, I., & Preuss, M. (2010). Selecting Small Audio Feature Sets in Music Classification by Means of Asymmetric Mutation. Parallel Problem Solving from Nature, PPSN XI, 6238, 314–323.
    link
  2. Szepannek, G., Gruhne, M., Bischl, B., Krey, S., Harczos, T., Klefenz, F., Dittmar, C., & Weihs, C. (2010). Perceptually Based Phoneme Recognition in Popular Music. In H. Locarek-Junge & C. Weihs (Eds.), Classification as a Tool for Research (Vol. 40, pp. 751–758). Springer. https://doi.org/10.1007/978-3-642-10745-0_83
  3. Bischl, B., Eichhoff, M., & Weihs, C. (2010). Selecting Groups of Audio Features by Statistical Tests and the Group Lasso. 9. ITG Fachtagung Sprachkommunikation.
    link
  4. Bischl, B., Mersmann, O., & Trautmann, H. (2010). Resampling Methods in Model Validation. In T. Bartz-Beielstein, M. Chiarandini, L. Paquete, & M. Preuss (Eds.), WEMACS – Proceedings of the Workshop on Experimental Methods for the Assessment of Computational Systems, Technical Report TR 10-2-007. Department of Computer Science, TU Dortmund University.
    link

2009

  1. Bischl, B., Ligges, U., & Weihs, C. (2009). Frequency estimation by DFT interpolation: A comparison of methods. SFB 475, Faculty of Statistics, TU Dortmund, Germany.
    link | pdf
  2. Szepannek, G., Bischl, B., & Weihs, C. (2009). On the combination of locally optimal pairwise classifiers. Engineering Applications of Artificial Intelligence, 22(1), 79–85. https://doi.org/10.1016/j.engappai.2008.04.009

2008

  1. Szepannek, G., Bischl, B., & Weihs, C. (2008). On the Combination of Locally Optimal Pairwise Classifiers. Engineering Applications of Artificial Intelligence, 22(1), 79–85.
    link

2007