Research

Publications

A full list of publications in BibTeX format is available here.

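For reference, the sketch below shows roughly how the first 2023 entry (Dandl et al.) could appear as a record in such a BibTeX export; the citation key, the @incollection entry type, and the choice of fields are illustrative assumptions, not an excerpt from the actual file, while the field values are transcribed from the entry listed under 2023 below.

  % Illustrative BibTeX record; key and entry type are assumptions,
  % field values are transcribed from 2023, entry 1, below.
  @incollection{Dandl2023InterpretableRegionalDescriptors,
    author    = {Dandl, S. and Casalicchio, G. and Bischl, B. and Bothmann, L.},
    title     = {Interpretable Regional Descriptors: Hyperbox-Based Local Explanations},
    editor    = {Koutra, D. and Plant, C. and Gomez Rodriguez, M. and Baralis, E. and Bonchi, F.},
    booktitle = {Machine Learning and Knowledge Discovery in Databases: Research Track},
    pages     = {479--495},
    publisher = {Springer Nature Switzerland},
    year      = {2023},
    doi       = {10.1007/978-3-031-43418-1_29}
  }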

2023

  1. Dandl, S., Casalicchio, G., Bischl, B., & Bothmann, L. (2023). Interpretable Regional Descriptors: Hyperbox-Based Local Explanations. In D. Koutra, C. Plant, M. Gomez Rodriguez, E. Baralis, & F. Bonchi (Eds.), Machine Learning and Knowledge Discovery in Databases: Research Track (pp. 479–495). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-43418-1_29
  2. Bothmann, L., Wimmer, L., Charrakh, O., Weber, T., Edelhoff, H., Peters, W., Nguyen, H., Benjamin, C., & Menzel, A. (2023). Automated wildlife image classification: An active learning tool for ecological applications. Ecological Informatics, 77(102231).
    link|pdf
  3. Stüber, A. T., Coors, S., Schachtner, B., Weber, T., Rügamer, D., Bender, A., Mittermeier, A., Öcal, O., Seidensticker, M., Ricke, J., Bischl, B., & Ingrisch, M. (2023). A Comprehensive Machine Learning Benchmark Study for Radiomics-Based Survival Analysis of CT Imaging Data in Patients With Hepatic Metastases of CRC. Investigative Radiology, 10–1097.
  4. Bothmann, L., Dandl, S., & Schomaker, M. (2023). Causal Fair Machine Learning via Rank-Preserving Interventional Distributions. ArXiv:2307.12797 [Cs, Stat].
    link
  5. Kolb, C., Müller, C. L., Bischl, B., & Rügamer, D. (2023). Smoothing the Edges: A General Framework for Smooth Optimization in Sparse Regularization using Hadamard Overparametrization. ArXiv Preprint ArXiv:2307.03571.
    link|pdf
  6. Kolb, C., Bischl, B., Müller, C. L., & Rügamer, D. (2023, July 1). Sparse Modality Regression. Proceedings of the 37th International Workshop on Statistical Modelling, IWSM 2023.
    link|pdf
  7. Wiese, J. G., Wimmer, L., Papamarkou, T., Bischl, B., Günnemann, S., & Rügamer, D. (2023, June 6). Towards Efficient Posterior Sampling in Deep Neural Networks via Symmetry Removal. Machine Learning and Knowledge Discovery in Databases (ECML-PKDD).
    link|pdf
  8. Rügamer, D. (2023). A New PHO-rmula for Improved Performance of Semi-Structured Networks. ICML 2023.
  9. Ott, F., Heublein, L., Rügamer, D., Bischl, B., & Mutschler, C. (2023). Fusing Structure from Motion and Simulation-Augmented Pose Regression from Optical Flow for Challenging Indoor Environments. ArXiv:2304.07250.
    link|pdf
  10. Rath, K., Rügamer, D., Bischl, B., von Toussaint, U., & Albert, C. (2023). Dependent state space Student-t processes for imputation and data augmentation in plasma diagnostics. Contributions to Plasma Physics.
  11. Dandl, S., Hofheinz, A., Binder, M., Bischl, B., & Casalicchio, G. (2023). counterfactuals: An R Package for Counterfactual Explanation Methods. ArXiv:2304.06569.
    link|pdf
  12. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2023, March 20). Cascaded Latent Diffusion Models for High-Resolution Chest X-ray Synthesis. Advances in Knowledge Discovery and Data Mining: 27th Pacific-Asia Conference, PAKDD 2023.
    link|pdf
  13. Weerts, H., Pfisterer, F., Feurer, M., Eggensperger, K., Bergman, E., Awad, N., Vanschoren, J., Pechenizkiy, M., Bischl, B., & Hutter, F. (2023). Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML. ArXiv:2303.08485 [Cs.AI].
    link|pdf
  14. Ott, F., Raichur, N. L., Rügamer, D., Feigl, T., Neumann, H., Bischl, B., & Mutschler, C. (2023). Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression. ArXiv:2208.00919.
    link|pdf
  15. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2023). Implicit Embeddings via GAN Inversion for High Resolution Chest Radiographs. MICCAI Workshop on Medical Applications with Disentanglements 2022.
    link|pdf
  16. Bothmann, L., Peters, K., & Bischl, B. (2023). What Is Fairness? Philosophical Considerations and Implications For FairML. ArXiv:2205.09622 [Cs, Stat].
    link
  17. Pielok, T., Bischl, B., & Rügamer, D. (2023, January 23). Approximate Bayesian Inference with Stein Functional Variational Gradient Descent. International Conference on Learning Representations.
    link|pdf
  18. Dorigatti, E., Bischl, B., & Rügamer, D. (2023, January 23). Frequentist Uncertainty Quantification in Semi-Structured Neural Networks. International Conference on Artificial Intelligence and Statistics.
  19. Jeblick, K., Schachtner, B., Dexl, J., Mittermeier, A., Stüber, A. T., Topalis, J., Weber, T., Wesp, P., Sabel, B., Ricke, J., & Ingrisch, M. (2023). ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports. ArXiv Preprint ArXiv:2212.14882.
    link|pdf
  20. Wimmer, L., Sale, Y., Hofman, P., Bischl, B., & Hüllermeier, E. (2023). Quantifying Aleatoric and Epistemic Uncertainty in Machine Learning: Are Conditional Entropy and Mutual Information Appropriate Measures? 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023).
    link|pdf
  21. Rügamer, D., Kolb, C., & Klein, N. (2022). Semi-Structured Distributional Regression. The American Statistician.
    link|pdf
  22. Fischer, S., Harutyunyan, L., Feurer, M., & Bischl, B. (2023). OpenML-CTR23 – A curated tabular regression benchmarking suite. AutoML Conference 2023 (Workshop).
    link|pdf
  23. Gündüz, H. A., Binder, M., To, X.-Y., Mreches, R., Bischl, B., McHardy, A. C., Münch, P. C., & Rezaei, M. (2023). A self-supervised deep learning method for data-efficient training in genomics. Communications Biology, 6(1), 928. https://doi.org/10.1038/s42003-023-05310-2
  24. Rauch, L., Aßenmacher, M., Huseljic, D., Wirth, M., Bischl, B., & Sick, B. (2023). ActiveGLAE: A Benchmark for Deep Active Learning with Transformers. ArXiv Preprint ArXiv:2306.10087.
  25. Vahidi, A., Wimmer, L., Gündüz, H. A., Bischl, B., Hüllermeier, E., & Rezaei, M. (2023). Diversified Ensemble of Independent Sub-Networks for Robust Self-Supervised Representation Learning. ArXiv Preprint ArXiv:2308.14705.
  26. Schneider, L., Bischl, B., & Thomas, J. (2023). Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models. Proceedings of the Genetic and Evolutionary Computation Conference, 538–547.
    link | pdf
  27. Münch, P., Mreches, R., To, X.-Y., Gündüz, H. A., Moosbauer, J., Klawitter, S., Deng, Z.-L., Robertson, G., Rezaei, M., Asgari, E., Franzosa, E., Huttenhower, C., Bischl, B., McHardy, A., & Binder, M. (2023). A platform for deep learning on (meta)genomic sequences (preprint). https://doi.org/10.21203/rs.3.rs-2527258/v1
  28. Bischl, B., Binder, M., Lang, M., Pielok, T., Richter, J., Coors, S., Thomas, J., Ullmann, T., Becker, M., Boulesteix, A.-L., Deng, D., & Lindauer, M. (2023). Hyperparameter Optimization: Foundations, Algorithms, Best Practices, and Open Challenges. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, e1484. https://doi.org/10.1002/widm.1484
  29. König, G., Freiesleben, T., & Grosse-Wentrup, M. (2023). Improvement-focused Causal Recourse (ICR). 37th AAAI Conference.
  30. Luther, C., König, G., & Grosse-Wentrup, M. (2023). Efficient SAGE Estimation via Causal Structure Learning. AISTATS.
  31. Feurer, M., Eggensperger, K., Bergman, E., Pfisterer, F., Bischl, B., & Hutter, F. (2023). Mind the Gap: Measuring Generalization Performance Across Multiple Objectives. In B. Crémilleux, S. Hess, & S. Nijssen (Eds.), Advances in Intelligent Data Analysis XXI. IDA 2023. (Vol. 13876, pp. 130–142). Springer, Cham.
    link|arXiv|pdf

2022

  1. Dorigatti, E., Bischl, B., & Schubert, B. (2022). Improved proteasomal cleavage prediction with positive-unlabeled learning. Extended Abstract Presented at Machine Learning for Health (ML4H) Symposium 2022, November 28th, 2022, New Orleans, United States & Virtual, abs/2209.07527.
    link | pdf
  2. Kook, L., Baumann, P. F. M., Dürr, O., Sick, B., & Rügamer, D. (2022). Estimating Conditional Distributions with Neural Networks using R package deeptrafo. ArXiv Preprint ArXiv:2211.13665.
    link|pdf
  3. Rügamer, D., Baumann, P. F. M., Kneib, T., & Hothorn, T. (2022). Probabilistic Time Series Forecasts with Autoregressive Transformation Models. Statistics & Computing.
    link|pdf
  4. Rügamer, D., Pfisterer, F., Bischl, B., & Grün, B. (2022). Mixture of Experts Distributional Regression: Implementation Using Robust Estimation with Adaptive First-order Methods. ArXiv Preprint ArXiv:2211.09875.
    link|pdf
  5. Ziegler, I., Ma, B., Nie, E., Bischl, B., Rügamer, D., Schubert, B., & Dorigatti, E. (2022). What cleaves? Is proteasomal cleavage prediction reaching a ceiling? Extended Abstract Presented at the NeurIPS Learning Meaningful Representations of Life (LMRL) Workshop 2022, abs/2209.07527.
    link | pdf
  6. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022, October 24). Representation Learning for Tablet and Paper Domain Adaptation in favor of Online Handwriting Recognition. MPRSS 2022.
  7. Ziegler, I., Ma, B., Nie, E., Bischl, B., Rügamer, D., Schubert, B., & Dorigatti, E. (2022, October 24). What cleaves? Is proteasomal cleavage prediction reaching a ceiling? NeurIPS 2022 Workshop on Learning Meaningful Representations of Life (LMRL).
    link|pdf
  8. Kaiser, P., Rügamer, D., & Kern, C. (2022, October 24). Uncertainty as a key to fair data-driven decision making. NeurIPS 2022 Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML).
    link|pdf
  9. Rezaei, M., Dorigatti, E., Rügamer, D., & Bischl, B. (2022, October 21). Learning Statistical Representation with Joint Deep Embedded Clustering. ArXiv Preprint ArXiv:2109.05232.
    link|pdf
  10. Schalk, D., Bischl, B., & Rügamer, D. (2022). Privacy-Preserving and Lossless Distributed Estimation of High-Dimensional Generalized Additive Mixed Models. ArXiv Preprint ArXiv:2210.07723.
    link|pdf
  11. Dandl, S., Bender, A., & Hothorn, T. (2022). Heterogeneous Treatment Effect Estimation for Observational Data using Model-based Forests [ArXiv]. arXiv:2210.02836.
    link
  12. Bothmann, L. (2022). Künstliche Intelligenz in der Strafverfolgung [Artificial intelligence in criminal prosecution]. In K. Peters (Ed.), Cyberkriminalität [Cybercrime]. LMU Munich.
    link
  13. Ghada, W., Casellas, E., Herbinger, J., Garcia-Benadí, A., Bothmann, L., Estrella, N., Bech, J., & Menzel, A. (2022). Stratiform and Convective Rain Classification Using Machine Learning Models and Micro Rain Radar. Remote Sensing, 14(18).
    link
  14. Ott, F., Rügamer, D., Heublein, L., Hamann, T., Barth, J., Bischl, B., & Mutschler, C. (2022). Benchmarking Online Sequence-to-Sequence and Character-based Handwriting Recognition from IMU-Enhanced Pens. International Journal on Document Analysis and Recognition (IJDAR).
    link|pdf
  15. Schiele, P., Berninger, C., & Rügamer, D. (2022). ARMA Cell: A Modular and Effective Approach for Neural Autoregressive Modeling. ArXiv Preprint ArXiv:2208.14919.
    link|pdf
  16. Schalk, D., Bischl, B., & Rügamer, D. (2022). Accelerated Componentwise Gradient Boosting using Efficient Data Representation and Momentum-based Optimization. Journal of Computational and Graphical Statistics.
    link | pdf
  17. Rath, K., Rügamer, D., Bischl, B., von Toussaint, U., Rea, C., Maris, A., Granetz, R., & Albert, C. (2022). Data augmentation for disruption prediction via robust surrogate models. Journal of Plasma Physics.
  18. Dandl, S., Pfisterer, F., & Bischl, B. (2022). Multi-Objective Counterfactual Fairness. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 328–331.
    link
  19. Mittermeier, M., Weigert, M., Rügamer, D., Küchenhoff, H., & Ludwig, R. (2022). A Deep Learning Version of Hess & Brezowsky's Classification of Großwetterlagen over Europe: Projection of Future Changes in a CMIP6 Large Ensemble. Environmental Research Letters.
  20. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022, June 29). Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift. ACM Multimedia.
    link|pdf
  21. Dandl, S., Hothorn, T., Seibold, H., Sverdrup, E., Wager, S., & Zeileis, A. (2022). What Makes Forest-Based Heterogeneous Treatment Effect Estimators Work? In arXiv:2206.10323.
    link
  22. Rügamer, D., Bender, A., Wiegrebe, S., Racek, D., Bischl, B., Müller, C., & Stachl, C. (2022, June 14). Factorized Structured Regression for Large-Scale Varying Coefficient Models. Machine Learning and Knowledge Discovery in Databases (ECML-PKDD).
    link|pdf
  23. Beaudry, G., Drouin, O., Gravel, J., Smyrnova, A., Bender, A., Orri, M., Geoffroy, M.-C., & Chadi, N. (2022). A Comparative Analysis of Pediatric Mental Health-Related Emergency Department Utilization in Montréal, Canada, before and during the COVID-19 Pandemic. Annals of General Psychiatry, 21(1), 17.
    link|pdf
  24. Klaß, A., Lorenz, S., Lauer-Schmaltz, M., Rügamer, D., Bischl, B., Mutschler, C., & Ott, F. (2022, June 4). Uncertainty-aware Evaluation of Time-Series Classification for Online Handwriting Recognition with Domain Shift. IJCAI-ECAI 2022, 1st International Workshop on Spatio-Temporal Reasoning and Learning.
  25. Fritz, C., Nicola, G. D., Günther, F., Rügamer, D., Rave, M., Schneble, M., Bender, A., Weigert, M., Brinks, R., Hoyer, A., Berger, U., Küchenhoff, H., & Kauermann, G. (2022). Challenges in Interpreting Epidemiological Surveillance Data - Experiences from Germany. Journal of Computational & Graphical Statistics.
  26. Rügamer, D. (2022). Additive Higher-Order Factorization Machines. ArXiv Preprint ArXiv:2205.14515.
    link|pdf
  27. Rügamer, D., Kolb, C., Fritz, C., Pfisterer, F., Kopper, P., Bischl, B., Shen, R., Bukas, C., de Andrade e Sousa, L. B., Thalmeier, D., Baumann, P., Kook, L., Klein, N., & Müller, C. L. (2022). deepregression: a Flexible Neural Network Framework for Semi-Structured Deep Distributional Regression. Journal of Statistical Software (Provisionally Accepted).
    link|pdf
  28. Schalk, D., Hoffmann, V., Bischl, B., & Mansmann, U. (2022). Distributed non-disclosive validation of predictive models by a modified ROC-GLM. ArXiv Preprint ArXiv:2202.10828.
    link | pdf
  29. Liew, B. X. W., Kovacs, F. M., Rügamer, D., & Royuela, A. (2022). Machine learning for prognostic modelling in individuals with non-specific neck pain. European Spine Journal.
  30. Fritz, C., Dorigatti, E., & Rügamer, D. (2022). Combining Graph Neural Networks and Spatio-temporal Disease Models to Predict COVID-19 Cases in Germany. Scientific Reports, 12.
    link|pdf
  31. Sonabend, R., Bender, A., & Vollmer, S. (2022). Avoiding C-hacking When Evaluating Survival Distribution Predictions with Discrimination Measures (Number arXiv:2112.04828). arXiv.
    link|pdf
  32. Rügamer, D., Baumann, P., & Greven, S. (2022). Selective Inference for Additive and Mixed Models. Computational Statistics and Data Analysis, 167, 107350.
    link|pdf
  33. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022). Cross-Modal Common Representation Learning with Triplet Loss Functions. ArXiv Preprint ArXiv:2202.07901.
    link|pdf
  34. Rügamer, D., Baumann, P. F. M., Kneib, T., & Hothorn, T. (2022). Probabilistic Time Series Forecasts with Autoregressive Transformation Models. ArXiv:2110.08248 [Cs, Stat].
    link|pdf
  35. Dorigatti, E., Goschenhofer, J., Schubert, B., Rezaei, M., & Bischl, B. (2022). Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection. ArXiv Preprint ArXiv:2109.05232.
    link|pdf
  36. Kopper, P., Wiegrebe, S., Bischl, B., Bender, A., & Rügamer, D. (2022). DeepPAMM: Deep Piecewise Exponential Additive Mixed Models for Complex Hazard Structures in Survival Analysis. Advances in Knowledge Discovery and Data Mining, 249–261.
    link|pdf
  37. Hartl, W. H., Kopper, P., Bender, A., Scheipl, F., Day, A. G., Elke, G., & Küchenhoff, H. (2022). Protein intake and outcome of critically ill patients: analysis of a large international database using piece-wise exponential additive mixed models. Critical Care, 26, 7.
    link|pdf
  38. Rezaei, M., Dorigatti, E., Rügamer, D., & Bischl, B. (2022). Joint Debiased Representation Learning and Imbalanced Data Clustering. IEEE International Conference on Data Mining (ICDM) Deep Learning and Clustering (DLC) Workshop.
  39. Li*, Y., Khakzar*, A., Zhang, Y., Sanisoglu, M., Kim, S. T., Rezaei, M., Bischl, B., & Navab, N. (2022). Analyzing the Effects of Handling Data Imbalance on Learned Features from Medical Images by Looking Into the Models. 2nd Workshop on Interpretable Machine Learning in Healthcare (IMLH 2022) at the 39th International Conference on Machine Learning (ICML 2022).
  40. Lebmeier, E., Aßenmacher, M., & Heumann, C. (2022, September). On the current state of reproducibility and reporting of uncertainty for Aspect-based Sentiment Analysis. Machine Learning and Knowledge Discovery in Databases (ECML-PKDD).
    pdf
  41. Aßenmacher, M., Dietrich, M., Elmaklizi, A., Hemauer, E. M., & Wagenknecht, N. (2022). Whitepaper: New Tools for Old Problems. https://doi.org/10.5281/zenodo.6606451
  42. Scholbeck, C. A., Funk, H., & Casalicchio, G. (2022). Algorithm-Agnostic Interpretations for Clustering.
    link
  43. Schneider, L., Schäpermeier, L., Prager, R. P., Bischl, B., Trautmann, H., & Kerschke, P. (2022). HPO X ELA: Investigating Hyperparameter Optimization Landscapes by Means of Exploratory Landscape Analysis. Parallel Problem Solving from Nature – PPSN XVII, 575–589.
    link | pdf
  44. Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., & Vanschoren, J. (2022). AMLB: an AutoML Benchmark. ArXiv Preprint ArXiv:2207.12560.
    link | pdf
  45. Böhme, R., Coors, S., Oster, P., Munser-Kiefer, M., & Hilbert, S. (2022). Machine learning for spelling acquisition - How accurate is the prediction of specific spelling errors in German primary school students? PsyArXiv. https://doi.org/10.31234/osf.io/shguf
  46. Karl, F., Pielok, T., Moosbauer, J., Pfisterer, F., Coors, S., Binder, M., Schneider, L., Thomas, J., Richter, J., Lang, M., & others. (2022). Multi-Objective Hyperparameter Optimization – An Overview. ArXiv Preprint ArXiv:2206.07438.
    link | pdf
  47. Schneider, L., Pfisterer, F., Thomas, J., & Bischl, B. (2022). A Collection of Quality Diversity Optimization Problems Derived from Hyperparameter Optimization of Machine Learning Models. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2136–2142.
    link | pdf
  48. Pargent, F., Pfisterer, F., Thomas, J., & Bischl, B. (2022). Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features. Computational Statistics, 1–22.
    link | pdf
  49. Schneider, L., Pfisterer, F., Kent, P., Branke, J., Bischl, B., & Thomas, J. (2022). Tackling Neural Architecture Search With Quality Diversity Optimization. International Conference on Automated Machine Learning, 9–1.
    link | pdf
  50. Koch, P., Aßenmacher, M., & Heumann, C. (2022). Pre-trained language models evaluating themselves - A comparative study. Proceedings of the Third Workshop on Insights from Negative Results in NLP, 180–187.
    link|pdf
  51. Herbinger, J., Bischl, B., & Casalicchio, G. (2022). REPID: Regional Effect Plots with implicit Interaction Detection. International Conference on Artificial Intelligence and Statistics (AISTATS), 25.
    link | pdf
  52. Scholbeck, C. A., Casalicchio, G., Molnar, C., Bischl, B., & Heumann, C. (2022). Marginal Effects for Non-Linear Prediction Functions.
    link
  53. Moosbauer, J., Binder, M., Schneider, L., Pfisterer, F., Becker, M., Lang, M., Kotthoff, L., & Bischl, B. (2022). Automated Benchmark-Driven Design and Explanation of Hyperparameter Optimizers. IEEE Transactions on Evolutionary Computation, 26(6), 1336–1350.
    link | pdf
  54. Pfisterer, F., Schneider, L., Moosbauer, J., Binder, M., & Bischl, B. (2022). Yahpo Gym – An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization. International Conference on Automated Machine Learning, 3–1.
    link | pdf
  55. Molnar, C., König, G., Herbinger, J., Freiesleben, T., Dandl, S., Scholbeck, C. A., Casalicchio, G., Grosse-Wentrup, M., & Bischl, B. (2022). General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models. In xxAI - Beyond Explainable AI (pp. 39–68). Springer International Publishing.
    link | pdf
  56. Au, Q., Herbinger, J., Stachl, C., Bischl, B., & Casalicchio, G. (2022). Grouped feature importance and combined features effect plot. Data Mining and Knowledge Discovery, 36(4), 1401–1450.
  57. Turkoglu, M. O., Becker, A., Gündüz, H. A., Rezaei, M., Bischl, B., Daudt, R. C., D’Aronco, S., Wegner, J. D., & Schindler, K. (2022). FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation. Advances in Neural Information Processing Systems (NeurIPS 2022).
    link | pdf
  58. Hurmer, N., To, X.-Y., Binder, M., Gündüz, H. A., Münch, P. C., Mreches, R., McHardy, A. C., Bischl, B., & Rezaei, M. (2022). Transformer Model for Genome Sequence Analysis. LMRL Workshop - NeurIPS 2022.
    link | pdf
  59. Freiesleben, T., König, G., Molnar, C., & Tejero-Cantero, A. (2022). Scientific inference with interpretable machine learning: Analyzing models to learn about real-world phenomena. ArXiv Preprint ArXiv:2206.05487.
  60. Goschenhofer, J., Ragupathy, P., Heumann, C., Bischl, B., & Aßenmacher, M. (2022). CC-Top: Constrained Clustering for Dynamic Topic Discovery. Workshop on Ever Evolving NLP (EvoNLP).
    link|pdf
  61. Dexl, J., Benz, M., Kuritcyn, P., Wittenberg, T., Bruns, V., Geppert, C., Hartmann, A., Bischl, B., & Goschenhofer, J. (2022). Robust Colon Tissue Cartography with Semi-Supervision. Current Directions in Biomedical Engineering, 8(2), 344–347.
    link|pdf
  62. Rueger, S., Goschenhofer, J., Nath, A., Firsching, M., Ennen, A., & Bischl, B. (2022). Deep-Learning-based Aluminum Sorting on Dual Energy X-Ray Transmission Data. Sensor-Based Sorting and Control. https://doi.org/10.2370/9783844085457

2021

  1. Hilbert, S., Coors, S., Kraus, E., Bischl, B., Lindl, A., Frei, M., Wild, J., Krauss, S., Goretzko, D., & Stachl, C. (2021). Machine learning for the educational sciences. Review of Education, 9(3), e3310. https://doi.org/10.1002/rev3.3310
  2. Liew, B. X. W., Rügamer, D., Duffy, K., Taylor, M., & Jackson, J. (2021). The mechanical energetics of walking across the adult lifespan. PloS One, 16(11), e0259817.
    link
  3. Mittermeier, M., Weigert, M., & Rügamer, D. (2021). Identifying the atmospheric drivers of drought and heat using a smoothed deep learning approach. NeurIPS 2021, Tackling Climate Change with Machine Learning.
    link|pdf
  4. Weber, T., Ingrisch, M., Fabritius, M., Bischl, B., & Rügamer, D. (2021). Survival-oriented embeddings for improving accessibility to complex data structures. NeurIPS 2021 Workshops, Bridging the Gap: From Machine Learning Research to Clinical Practice.
    link|pdf
  5. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2021). Towards modelling hazard factors in unstructured data spaces using gradient-based latent interpolation. NeurIPS 2021 Workshops, Deep Generative Models and Downstream Applications.
    link|pdf
  6. Liew, B. X. W., Rügamer, D., Zhai, X. J., Morris, S., & Netto, K. (2021). Comparing machine, deep, and transfer learning in predicting joint moments in running. Journal of Biomechanics.
  7. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2021, October 3). Joint Classification and Trajectory Regression of Online Handwriting using a Multi-Task Learning Approach. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
  8. Goschenhofer, J., Hvingelby, R., Rügamer, D., Thomas, J., Wagner, M., & Bischl, B. (2021, September 18). Deep Semi-Supervised Learning for Time Series Classification. 20th IEEE International Conference on Machine Learning and Applications (ICMLA).
    link | pdf
  9. Python, A., Bender, A., Blangiardo, M., Illian, J. B., Lin, Y., Liu, B., Lucas, T. C. D., Tan, S., Wen, Y., Svanidze, D., & Yin, J. (2021). A Downscaling Approach to Compare COVID-19 Count Data from Databases Aggregated at Different Spatial Scales. Journal of the Royal Statistical Society: Series A (Statistics in Society). https://doi.org/10.1111/rssa.12738
  10. Bauer, A., Klima, A., Gauß, J., Kümpel, H., Bender, A., & Küchenhoff, H. (2021). Mundus Vult Decipi, Ergo Decipiatur: Visual Communication of Uncertainty in Election Polls. PS: Political Science & Politics, 1–7. https://doi.org/10.1017/S1049096521000950
  11. Rezaei, M., Soleymani, F., Bischl, B., & Azizi, S. (2021). Deep Bregman Divergence for Contrastive Learning of Visual Representations. ArXiv Preprint ArXiv:2109.07455.
  12. Soleymani, F., Eslami, M., Elze, T., Bischl, B., & Rezaei, M. (2021). Deep Variational Clustering Framework for Self-labeling of Large-scale Medical Images. ArXiv Preprint ArXiv:2109.10777.
  13. Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., & Vanschoren, J. (2021). OpenML Benchmarking Suites. In J. Vanschoren & S. Yeung (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (Vol. 1).
    link | pdf
  14. Fabritius, M. P., Seidensticker, M., Rueckel, J., Heinze, C., Pech, M., Paprottka, K. J., Paprottka, P. M., Topalis, J., Bender, A., Ricke, J., Mittermeier, A., & Ingrisch, M. (2021). Bi-Centric Independent Validation of Outcome Prediction after Radioembolization of Primary and Secondary Liver Cancer. Journal of Clinical Medicine, 10(16), 3668. https://doi.org/10.3390/jcm10163668
  15. Pfisterer, F., Kern, C., Dandl, S., Sun, M., Kim, M. P., & Bischl, B. (2021). mcboost: Multi-Calibration Boosting for R. Journal of Open Source Software, 6(64), 3453. https://doi.org/10.21105/joss.03453
  16. Bothmann, L., Strickroth, S., Casalicchio, G., Rügamer, D., Lindauer, M., Scheipl, F., & Bischl, B. (2021). Developing Open Source Educational Resources for Machine Learning and Data Science. ArXiv:2107.14330 [Cs, Stat].
    link
  17. Falla, D., Devecchi, V., Jimenez-Grande, D., Rügamer, D., & Liew, B. (2021). Modern Machine Learning Approaches Applied in Spinal Pain Research. Journal of Electromyography and Kinesiology.
  18. *Coors, S., *Schalk, D., Bischl, B., & Rügamer, D. (2021). Automatic Componentwise Boosting: An Interpretable AutoML System. ECML-PKDD Workshop on Automating Data Science.
    link | pdf
  19. Berninger, C., Stöcker, A., & Rügamer, D. (2021). A Bayesian Time-Varying Autoregressive Model for Improved Short- and Long-Term Prediction. Journal of Forecasting.
    link|pdf
  20. Python, A., Bender, A., Nandi, A. K., Hancock, P. A., Arambepola, R., Brandsch, J., & Lucas, T. C. D. (2021). Predicting non-state terrorism worldwide. Science Advances, 7(31), eabg4778. https://doi.org/10.1126/sciadv.abg4778
  21. Baumann, P. F. M., Hothorn, T., & Rügamer, D. (2021). Deep Conditional Transformation Models. Machine Learning and Knowledge Discovery in Databases. Research Track, 3–18.
    link|pdf
  22. König, G., Freiesleben, T., Bischl, B., Casalicchio, G., & Grosse-Wentrup, M. (2021). Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT).
    link
  23. Ramjith, J., Bender, A., Roes, K. C. B., & Jonker, M. A. (2021). Recurrent Events Analysis with Piece-wise exponential Additive Mixed Models. Research Square. https://doi.org/10.21203/rs.3.rs-563303/v1
  24. Pfisterer, F., van Rijn, J. N., Probst, P., Müller, A., & Bischl, B. (2021). Learning Multiple Defaults for Machine Learning Algorithms. 2021 Genetic and Evolutionary Computation Conference Companion (GECCO ’21 Companion). https://doi.org/10.1145/3449726.3459532
  25. Gijsbers, P., Pfisterer, F., van Rijn, J. N., Bischl, B., & Vanschoren, J. (2021). Meta-Learning for Symbolic Hyperparameter Defaults. In 2021 Genetic and Evolutionary Computation Conference Companion (GECCO ’21 Companion). ACM. https://doi.org/10.1145/3449726.3459532
  26. Rath, K., Albert, C. G., Bischl, B., & von Toussaint, U. (2021). Symplectic Gaussian process regression of maps in Hamiltonian systems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(5), 053121. https://doi.org/10.1063/5.0048129
  27. Kopper, P., Pölsterl, S., Wachinger, C., Bischl, B., Bender, A., & Rügamer, D. (2021). Semi-Structured Deep Piecewise Exponential Models. In R. Greiner, N. Kumar, T. A. Gerds, & M. van der Schaar (Eds.), Proceedings of AAAI Spring Symposium on Survival Prediction - Algorithms, Challenges, and Applications 2021 (Vol. 146, pp. 40–53). PMLR.
    link|pdf
  28. König, G., Molnar, C., Bischl, B., & Grosse-Wentrup, M. (2021). Relative Feature Importance. 2020 25th International Conference on Pattern Recognition (ICPR), 9318–9325.
    link | pdf
  29. Gerostathopoulos, I., Plášil, F., Prehofer, C., Thomas, J., & Bischl, B. (2021). Automated Online Experiment-Driven Adaptation–Mechanics and Cost Aspects. IEEE Access, 9, 58079–58087.
    link | pdf
  30. Liew, B., Lee, H. Y., Rügamer, D., Nunzio, A. M. D., Heneghan, N. R., Falla, D., & Evans, D. W. (2021). A novel metric of reliability in pressure pain threshold measurement. Scientific Reports (Nature).
  31. Küchenhoff, H., Günther, F., Höhle, M., & Bender, A. (2021). Analysis of the early COVID-19 epidemic curve in Germany by regression models with change points. Epidemiology & Infection, 1–17. https://doi.org/10.1017/S0950268821000558
  32. Bender, A., Rügamer, D., Scheipl, F., & Bischl, B. (2021). A General Machine Learning Framework for Survival Analysis. In F. Hutter, K. Kersting, J. Lijffijt, & I. Valera (Eds.), Machine Learning and Knowledge Discovery in Databases (pp. 158–173). Springer International Publishing. https://doi.org/10.1007/978-3-030-67664-3_10
  33. Sonabend, R., Király, F. J., Bender, A., Bischl, B., & Lang, M. (2021). mlr3proba: An R Package for Machine Learning in Survival Analysis. Bioinformatics, btab039. https://doi.org/10.1093/bioinformatics/btab039
  34. Agrawal, A., Pfisterer, F., Bischl, B., Chen, J., Sood, S., Shah, S., Buet-Golfouse, F., Mateen, B. A., & Vollmer, S. J. (2021). Debiasing classifiers: is reality at variance with expectation? Available at SSRN 3711681.
    link
  35. Kaminwar, S. R., Goschenhofer, J., Thomas, J., Thon, I., & Bischl, B. (2021). Structured Verification of Machine Learning Models in Industrial Settings. Big Data.
    link
  36. Molnar, C., Freiesleben, T., König, G., Casalicchio, G., Wright, M. N., & Bischl, B. (2021). Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. ArXiv Preprint ArXiv:2109.01433.
    link | pdf
  37. Moosbauer, J., Herbinger, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2021). Explaining Hyperparameter Optimization via Partial Dependence Plots. Advances in Neural Information Processing Systems (NeurIPS 2021), 34.
    link | pdf
  38. Moosbauer, J., Herbinger, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2021). Towards Explaining Hyperparameter Optimization via Partial Dependence Plots. 8th ICML Workshop on Automated Machine Learning (AutoML).
    link | pdf
  39. Binder, M., Pfisterer, F., Lang, M., Schneider, L., Kotthoff, L., & Bischl, B. (2021). mlr3pipelines - Flexible Machine Learning Pipelines in R. Journal of Machine Learning Research, 22(184), 1–7.
    link | pdf
  40. Schneider, L., Pfisterer, F., Binder, M., & Bischl, B. (2021). Mutation is All You Need. 8th ICML Workshop on Automated Machine Learning.
    pdf
  41. Becker, M., Binder, M., Bischl, B., Lang, M., Pfisterer, F., Reich, N. G., Richter, J., Schratz, P., & Sonabend, R. (2021). mlr3 book.
    link
  42. Gündüz, H. A., Binder, M., To, X.-Y., Mreches, R., Münch, P. C., McHardy, A. C., Bischl, B., & Rezaei, M. (2021). Self-GenomeNet: Self-supervised Learning with Reverse-Complement Context Prediction for Nucleotide-level Genomics Data.
    link | pdf

2020

  1. Günther, F., Bender, A., Katz, K., Küchenhoff, H., & Höhle, M. (2020). Nowcasting the COVID-19 pandemic in Bavaria. Biometrical Journal.
    link|pdf
  2. Liew, B. X. W., Peolsson, A., Rügamer, D., Wibault, J., Löfgren, H., Dedering, A., Zsigmond, P., & Falla, D. (2020). Clinical predictive modelling of post-surgical recovery in individuals with cervical radiculopathy – a machine learning approach. Scientific Reports.
    link
  3. Rügamer, D., Pfisterer, F., & Bischl, B. (2020). Neural Mixture Distributional Regression. ArXiv:2010.06889 [Cs, Stat].
    link|pdf
  4. Guenther, F., Bender, A., Höhle, M., Wildner, M., & Küchenhoff, H. (2020). Analysis of the COVID-19 pandemic in Bavaria: adjusting for misclassification. MedRxiv, 2020.09.29.20203877. https://doi.org/10.1101/2020.09.29.20203877
  5. Dandl, S., Molnar, C., Binder, M., & Bischl, B. (2020). Multi-Objective Counterfactual Explanations. In T. Bäck, M. Preuss, A. Deutz, H. Wang, C. Doerr, M. Emmerich, & H. Trautmann (Eds.), Parallel Problem Solving from Nature – PPSN XVI (pp. 448–469). Springer International Publishing.
    link
  6. Schratz, P., Muenchow, J., Iturritxa, E., Cortés, J., Bischl, B., & Brenning, A. (2020). Monitoring forest health using hyperspectral imagery: Does feature selection improve the performance of machine-learning techniques?
    link
  7. Bender, A., Python, A., Lindsay, S. W., Golding, N., & Moyes, C. L. (2020). Modelling geospatial distributions of the triatomine vectors of Trypanosoma cruzi in Latin America. PLOS Neglected Tropical Diseases, 14(8), e0008411. https://doi.org/10.1371/journal.pntd.0008411
  8. Binder, M., Pfisterer, F., & Bischl, B. (2020, July 18). Collecting Empirical Data About Hyperparameters for Data Driven AutoML. AutoML Workshop at ICML 2020.
    pdf
  9. Binder, M., Moosbauer, J., Thomas, J., & Bischl, B. (2020). Multi-Objective Hyperparameter Tuning and Feature Selection Using Filter Ensembles. Proceedings of the 2020 Genetic and Evolutionary Computation Conference, 471–479. https://doi.org/10.1145/3377930.3389815
  10. Beggel, L., Pfeiffer, M., & Bischl, B. (2020). Robust Anomaly Detection in Images using Adversarial Autoencoders. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 206–222.
    link | pdf
  11. Dorigatti, E., & Schubert, B. (2020). Joint epitope selection and spacer design for string-of-beads vaccines. BioRxiv. https://doi.org/10.1101/2020.04.25.060988
  12. Pfister, F. M. J., Um, T. T., Pichler, D. C., Goschenhofer, J., Abedinpour, K., Lang, M., Endo, S., Ceballos-Baumann, A. O., Hirche, S., Bischl, B., & others. (2020). High-Resolution Motor State Detection in Parkinson’s Disease Using Convolutional Neural Networks. Scientific Reports, 10(1), 1–11.
    link
  13. Scholbeck, C. A., Molnar, C., Heumann, C., Bischl, B., & Casalicchio, G. (2020). Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations. In P. Cellier & K. Driessens (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019 (pp. 205–216). Springer International Publishing.
    link | pdf
  14. Goerigk, S., Hilbert, S., Jobst, A., Falkai, P., Bühner, M., Stachl, C., Bischl, B., Coors, S., Ehring, T., Padberg, F., & Sarubin, N. (2020). Predicting instructed simulation and dissimulation when screening for depressive symptoms. European Archives of Psychiatry and Clinical Neuroscience, 270(2), 153–168.
    link
  15. Molnar, C., König, G., Bischl, B., & Casalicchio, G. (2020). Model-agnostic Feature Importance and Effects with Dependent Features–A Conditional Subgroup Approach. ArXiv Preprint ArXiv:2006.04628.
    link | pdf
  16. Molnar, C., König, G., Herbinger, J., Freiesleben, T., Dandl, S., Scholbeck, C. A., Casalicchio, G., Grosse-Wentrup, M., & Bischl, B. (2020). Pitfalls to Avoid when Interpreting Machine Learning Models. ICML Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.
    link | pdf
  17. Molnar, C., Casalicchio, G., & Bischl, B. (2020). Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability. In P. Cellier & K. Driessens (Eds.), Machine Learning and Knowledge Discovery in Databases (pp. 193–204). Springer International Publishing.
    link | pdf
  18. Molnar, C., Casalicchio, G., & Bischl, B. (2020). Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges. In I. Koprinska, M. Kamp, A. Appice, C. Loglisci, L. Antonie, A. Zimmermann, R. Guidotti, Ö. Özgöbek, R. P. Ribeiro, R. Gavaldà, J. Gama, L. Adilova, Y. Krishnamurthy, P. M. Ferreira, D. Malerba, I. Medeiros, M. Ceci, G. Manco, E. Masciari, … J. A. Gulla (Eds.), ECML PKDD 2020 Workshops (pp. 417–431). Springer International Publishing.
    link | pdf
  19. Stachl, C., Au, Q., Schoedel, R., Gosling, S. D., Harari, G. M., Buschek, D., Völkel, S. T., Schuwerk, T., Oldemeier, M., Ullmann, T., & others. (2020). Predicting personality from patterns of behavior collected with smartphones. Proceedings of the National Academy of Sciences.
    link | pdf
  20. Bommert, A., Sun, X., Bischl, B., Rahnenführer, J., & Lang, M. (2020). Benchmark for filter methods for feature selection in high-dimensional classification data. Computational Statistics & Data Analysis, 143, 106839.
    link | pdf
  21. Sun, X., Bommert, A., Pfisterer, F., Rahnenführer, J., Lang, M., & Bischl, B. (2020). High Dimensional Restrictive Federated Model Selection with Multi-objective Bayesian Optimization over Shifted Distributions. In Y. Bi, R. Bhatia, & S. Kapoor (Eds.), Intelligent Systems and Applications (pp. 629–647). Springer International Publishing. https://doi.org/10.1007/978-3-030-29516-5_48
  22. Rügamer, D., & Greven, S. (2020). Inference for L2-Boosting. Statistics and Computing, 30, 279–289.
    link|pdf
  23. Liew, B. X. W., Rügamer, D., Stöcker, A., & De Nunzio, A. M. (2020). Classifying neck pain status using scalar and functional biomechanical variables – development of a method using functional data boosting. Gait & Posture, 75, 146–150.
    link
  24. Liew, B., Rügamer, D., De Nunzio, A., & Falla, D. (2020). Interpretable machine learning models for classifying low back pain status using functional physiological variables. European Spine Journal, 29, 1845–1859.
    link
  25. Liew, B. X. W., Rügamer, D., Abichandani, D., & De Nunzio, A. M. (2020). Classifying individuals with and without patellofemoral pain syndrome using ground force profiles – Development of a method using functional data boosting. Gait & Posture, 80, 90–95.
    link
  26. Ellenbach, N., Boulesteix, A.-L., Bischl, B., Unger, K., & Hornung, R. (2020). Improved Outcome Prediction Across Data Sources Through Robust Parameter Tuning. Journal of Classification, 1–20.
    link|pdf
  27. Brockhaus, S., Rügamer, D., & Greven, S. (2020). Boosting Functional Regression Models with FDboost. Journal of Statistical Software, 94(10), 1–50.
  28. Dorigatti, E., & Schubert, B. (2020). Graph-theoretical formulation of the generalized epitope-based vaccine design problem. PLOS Computational Biology, 16(10), e1008237. https://doi.org/10.1371/journal.pcbi.1008237

2019

  1. Sun, X., & Bischl, B. (2019, December 6). Tutorial and Survey on Probabilistic Graphical Model and Variational Inference in Deep Reinforcement Learning. 2019 IEEE Symposium Series on Computational Intelligence (SSCI).
    link|pdf
  2. Pfisterer, F., Thomas, J., & Bischl, B. (2019). Towards Human Centered AutoML. In arXiv preprint arXiv:1911.02391.
    link | pdf
  3. Pfisterer, F., Beggel, L., Sun, X., Scheipl, F., & Bischl, B. (2019). Benchmarking time series classification – Functional data vs machine learning approaches. In arXiv preprint arXiv:1911.07511.
    link | pdf
  4. Schmid, M., Bischl, B., & Kestler, H. A. (2019). Proceedings of Reisensburg 2016–2017. Springer.
    link
  5. Beggel, L., Kausler, B. X., Schiegg, M., Pfeiffer, M., & Bischl, B. (2019). Time series anomaly detection based on shapelet learning. Computational Statistics, 34(3), 945–976.
    link | pdf
  6. Pfisterer, F., Coors, S., Thomas, J., & Bischl, B. (2019). Multi-Objective Automatic Machine Learning with AutoxgboostMC. In arXiv preprint arXiv:1908.10796.
    link | pdf
  7. Sun, X., Lin, J., & Bischl, B. (2019). ReinBo: Machine Learning pipeline search and configuration with Bayesian Optimization embedded Reinforcement Learning. CoRR, abs/1904.05381.
    link | pdf
  8. Au, Q., Schalk, D., Casalicchio, G., Schoedel, R., Stachl, C., & Bischl, B. (2019). Component-Wise Boosting of Targets for Multi-Output Prediction. ArXiv Preprint ArXiv:1904.03943.
    link | pdf
  9. Probst, P., Boulesteix, A.-L., & Bischl, B. (2019). Tunability: Importance of Hyperparameters of Machine Learning Algorithms. Journal of Machine Learning Research, 20(53), 1–32.
    link | pdf
  10. Casalicchio, G., Molnar, C., & Bischl, B. (2019). Visualizing the Feature Importance for Black Box Models. In M. Berlingerio, F. Bonchi, T. Gärtner, N. Hurley, & G. Ifrim (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018 (pp. 655–670). Springer International Publishing.
    link | pdf
  11. Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., & Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44), 1903.
    link | pdf
  12. Völkel, S. T., Schödel, R., Buschek, D., Stachl, C., Au, Q., Bischl, B., Bühner, M., & Hussmann, H. (2019). Opportunities and challenges of utilizing personality traits for personalization in HCI. Personalized Human-Computer Interaction, 31–65.
    link
  13. Gijsbers, P., LeDell, E., Thomas, J., Poirier, S., Bischl, B., & Vanschoren, J. (2019). An Open Source AutoML Benchmark. CoRR, abs/1907.00909.
    link | pdf
  14. Sun, X., Wang, Y., Gossmann, A., & Bischl, B. (2019). Resampling-based Assessment of Robustness to Distribution Shift for Deep Neural Networks. CoRR, abs/1906.02972.
    link | pdf
  15. Pfister, F. M. J., von Schumann, A., Bemetz, J., Thomas, J., Ceballos-Baumann, A., Bischl, B., & Fietzek, U. (2019). Recognition of subjects with early-stage Parkinson from free-living unilateral wrist-sensor data using a hierarchical machine learning model. Journal of Neural Transmission, 126(5), 663–663.
  16. Schüller, N., Boulesteix, A.-L., Bischl, B., Unger, K., & Hornung, R. (2019). Improved outcome prediction across data sources through robust parameter tuning (Vol. 221).
    link | pdf
  17. Stachl, C., Au, Q., Schoedel, R., Buschek, D., Völkel, S., Schuwerk, T., Oldemeier, M., Ullmann, T., Hussmann, H., Bischl, B., & Bühner, M. (2019). Behavioral Patterns in Smartphone Usage Predict Big Five Personality Traits. https://doi.org/10.31234/osf.io/ks4vd
  18. Schuwerk, T., Kaltefleiter, L. J., Au, J.-Q., Hoesl, A., & Stachl, C. (2019). Enter the Wild: Autistic Traits and Their Relationship to Mentalizing and Social Interaction in Everyday Life. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-019-04134-6
  19. König, G., & Grosse-Wentrup, M. (2019). A Causal Perspective on Challenges for AI in Precision Medicine.
    link
  20. Goschenhofer, J., Pfister, F. M. J., Yuksel, K. A., Bischl, B., Fietzek, U., & Thomas, J. (2019). Wearable-based Parkinson’s Disease Severity Monitoring using Deep Learning. Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2019, 400–415.
    link | pdf
  21. Sun, X., Gossmann, A., Wang, Y., & Bischl, B. (2019). Variational Resampling Based Assessment of Deep Neural Networks under Distribution Shift. 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 1344–1353.
    link|pdf

2018

  1. van Rijn, J. N., Pfisterer, F., Thomas, J., Bischl, B., & Vanschoren, J. (2018, December 8). Meta Learning for Defaults–Symbolic Defaults. NeurIPS 2018 Workshop on Meta Learning.
    link | pdf
  2. Arenas, D., Barp, E., Bohner, G., Churvay, V., Kiraly, F., Lienart, T., Vollmer, S., Innes, M., & Bischl, B. (2018). Workshop contribution MLJ.
    pdf
  3. Molnar, C., Casalicchio, G., & Bischl, B. (2018). iml: An R package for Interpretable Machine Learning. The Journal of Open Source Software, 3, 786.
    link | pdf
  4. Kestler, H. A., Bischl, B., & Schmid, M. (2018). Proceedings of Reisensburg 2014–2015. Springer.
    link
  5. Bender, A., & Scheipl, F. (2018). pammtools: Piece-wise exponential Additive Mixed Modeling tools. ArXiv:1806.01042 [Stat].
    link| pdf
  6. Fossati, M., Dorigatti, E., & Giuliano, C. (2018). N-ary relation extraction for simultaneous T-Box and A-Box knowledge base augmentation. Semantic Web, 9(4), 413–439. https://doi.org/10.3233/SW-170269
  7. Thomas, J., Mayr, A., Bischl, B., Schmid, M., Smith, A., & Hofner, B. (2018). Gradient boosting for distributional regression: faster tuning and improved variable selection via noncyclical updates. Statistics and Computing, 28(3), 673–687.
    link | pdf
  8. Kühn, D., Probst, P., Thomas, J., & Bischl, B. (2018). Automatic Exploration of Machine Learning Experiments on OpenML. ArXiv Preprint ArXiv:1806.10961.
    link | pdf
  9. Thomas, J., Coors, S., & Bischl, B. (2018). Automatic Gradient Boosting. ICML AutoML Workshop.
    link | pdf
  10. Schalk, D., Thomas, J., & Bischl, B. (2018). compboost: Modular Framework for Component-Wise Boosting. JOSS, 3(30), 967.
    link | pdf
  11. Horn, D., Demircioğlu, A., Bischl, B., Glasmachers, T., & Weihs, C. (2018). A Comparative Study on Large Scale Kernelized Support Vector Machines. Advances in Data Analysis and Classification, 1–17. https://doi.org/10.1007/s11634-016-0265-7
  12. Schoedel, R., Au, Q., Völkel, S. T., Lehmann, F., Becker, D., Bühner, M., Bischl, B., Hussmann, H., & Stachl, C. (2018). Digital Footprints of Sensation Seeking. Zeitschrift Für Psychologie, 226(4), 232–245. https://doi.org/10.1027/2151-2604/a000342
  13. Völkel, S. T., Graefe, J., Schödel, R., Häuslschmid, R., Stachl, C., Au, Q., & Hussmann, H. (2018). I Drive My Car and My States Drive Me. Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI ’18, 198–203. https://doi.org/10.1145/3239092.3267102
  14. Rügamer, D., & Greven, S. (2018). Selective inference after likelihood- or test-based model selection in linear models. Statistics & Probability Letters, 140, 7–12.
  15. Burdukiewicz, M., Karas, M., Jessen, L. E., Kosinski, M., Bischl, B., & Rödiger, S. (2018). Conference Report: Why R? 2018. The R Journal, 10(2), 572–578.
    pdf

2017

  1. Stachl, C., Hilbert, S., Au, Q., Buschek, D., De Luca, A., Bischl, B., Hussmann, H., & Bühner, M. (2017). Personality Traits Predict Smartphone Usage. European Journal of Personality, 31(6), 701–722. https://doi.org/10.1002/per.2113
  2. Cáceres, L. P., Bischl, B., & Stützle, T. (2017). Evaluating Random Forest Models for Irace. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 1146–1153.
    link|pdf
  3. Casalicchio, G., Lesaffre, E., Küchenhoff, H., & Bruyneel, L. (2017). Nonlinear Analysis to Detect if Excellent Nursing Work Environments Have Highest Well-Being. Journal of Nursing Scholarship, 49(5), 537–547.
    link | pdf
  4. Bischl, B., Richter, J., Bossek, J., Horn, D., Thomas, J., & Lang, M. (2017). mlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions. ArXiv Preprint ArXiv:1703.03373.
    link | pdf
  5. Horn, D., Dagge, M., Sun, X., & Bischl, B. (2017). First Investigations on Noisy Model-Based Multi-objective Optimization. In Evolutionary Multi-Criterion Optimization: 9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017, Proceedings (pp. 298–313). Springer International Publishing. https://doi.org/10.1007/978-3-319-54157-0_21
  6. Beggel, L., Sun, X., & Bischl, B. (2017). mlrFDA: an R toolbox for functional data analysis. Ulmer Informatik-Berichte, 15.
    pdf
  7. Horn, D., Bischl, B., Demircioglu, A., Glasmachers, T., Wagner, T., & Weihs, C. (2017). Multi-objective selection of algorithm portfolios. Archives of Data Science.
    link
  8. Probst, P., Au, Q., Casalicchio, G., Stachl, C., & Bischl, B. (2017). Multilabel Classification with R Package mlr. The R Journal, 9(1), 352–369.
    link | pdf
  9. Casalicchio, G., Bossek, J., Lang, M., Kirchhoff, D., Kerschke, P., Hofner, B., Seibold, H., Vanschoren, J., & Bischl, B. (2017). OpenML: An R package to connect to the machine learning platform OpenML. Computational Statistics, 977–991.
    link | pdf
  10. Kotthaus, H., Richter, J., Lang, A., Thomas, J., Bischl, B., Marwedel, P., Rahnenführer, J., & Lang, M. (2017). RAMBO: Resource-Aware Model-Based Optimization with Scheduling for Heterogeneous Runtimes and a Comparison with Asynchronous Model-Based Optimization. International Conference on Learning and Intelligent Optimization, 180–195.
    link | pdf
  11. Thomas, J., Hepp, T., Mayr, A., & Bischl, B. (2017). Probing for sparse and fast variable selection with model-based boosting. Computational and Mathematical Methods in Medicine, 2017.
    link | pdf
  12. Lang, M., Bischl, B., & Surmann, D. (2017). batchtools: Tools for R to work on batch systems. The Journal of Open Source Software, 2(10).
    link

2016

  1. Richter, J., Kotthaus, H., Bischl, B., Marwedel, P., Rahnenführer, J., & Lang, M. (2019, May 29). Faster Model-Based Optimization through Resource-Aware Scheduling Strategies. Proceedings of the 10th Learning and Intelligent OptimizatioN Conference (LION 10).
    link|pdf
  2. Horn, D., & Bischl, B. (2016). Multi-objective Parameter Configuration of Machine Learning Algorithms using Model-Based Optimization. 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 1–8.
    link|pdf
  3. Bischl, B., Lang, M., Kotthoff, L., Schiffner, J., Richter, J., Studerus, E., Casalicchio, G., & Jones, Z. M. (2016). mlr: Machine Learning in R. The Journal of Machine Learning Research, 17(1), 5938–5942.
    link | pdf
  4. Bauer, N., Friedrichs, K., Bischl, B., & Weihs, C. (2016, August 4). Fast Model Based Optimization of Tone Onset Detection by Instance Sampling. Data Analysis, Machine Learning and Knowledge Discovery.
    link
  5. Weihs, C., Horn, D., & Bischl, B. (2016). Big data Classification: Aspects on Many Features and Many Observations. In A. F. X. Wilhelm & H. A. Kestler (Eds.), Analysis of Large and Complex Data (pp. 113–122). Springer International Publishing. https://doi.org/10.1007/978-3-319-25226-1_10
  6. Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Fréchette, A., Hoos, H., Hutter, F., Leyton-Brown, K., Tierney, K., & Vanschoren, J. (2016). ASlib: A Benchmark Library for Algorithm Selection. Artificial Intelligence, 237, 41–58.
    link
  7. Bischl, B., Kühn, T., & Szepannek, G. (2016). On Class Imbalance Correction for Classification Algorithms in Credit Scoring. In Operations Research Proceedings 2014 (pp. 37–43). Springer International Publishing.
    link|pdf
  8. Demircioglu, A., Horn, D., Glasmachers, T., Bischl, B., & Weihs, C. (2016). Fast model selection by limiting SVM training times (arXiv:1602.03368v1). arXiv.org.
    link
  9. Casalicchio, G., Bischl, B., Boulesteix, A.-L., & Schmid, M. (2015). The residual-based predictiveness curve: A visual tool to assess the performance of prediction models. Biometrics, 72(2), 392–401.
    link | pdf
  10. Degroote, H., Bischl, B., Kotthoff, L., & De Causmaecker, P. (2016). Reinforcement Learning for Automatic Online Algorithm Selection - an Empirical Study. ITAT 2016 Proceedings, 1649, 93–101.
    link
  11. Feilke, M., Bischl, B., Schmid, V. J., & Gertheiss, J. (2016). Boosting in nonlinear regression models with an application to DCE-MRI data. Methods of Information in Medicine, 55(01), 31–41.
  12. Beggel, L., Kausler, B. X., Schiegg, M., & Bischl, B. (2016). Anomaly Detection with Shapelet-Based Feature Learning for Time Series. Ulmer Informatik-Berichte, 25.
    link | pdf
  13. Rietzler, M., Geiselhart, F., Thomas, J., & Rukzio, E. (2016). FusionKit: a generic toolkit for skeleton, marker and rigid-body tracking. Proceedings of the 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 73–84.
    link
  14. Schiffner, J., Bischl, B., Lang, M., Richter, J., Jones, Z. M., Probst, P., Pfisterer, F., Gallo, M., Kirchhoff, D., Kühn, T., Thomas, J., & Kotthoff, L. (2016). mlr Tutorial.
    link | pdf

2015

  1. Vanschoren, J., van Rijn, J. N., & Bischl, B. (2015). Taking machine learning research online with OpenML. In W. Fan, A. Bifet, Q. Yang, & P. S. Yu (Eds.), Proceedings of the 4th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications (Vol. 41, pp. 1–4). PMLR.
    link|pdf
  2. Mantovani, R. G., Rossi, A. L. D., Vanschoren, J., Bischl, B., & Carvalho, A. C. P. L. F. (2015). To tune or not to tune: Recommending when to adjust SVM hyper-parameters via meta-learning. 2015 International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN.2015.7280644
  3. Bossek, J., Bischl, B., Wagner, T., & Rudolph, G. (2015). Learning feature-parameter mappings for parameter tuning via the profile expected improvement. Proceedings of the 2015 Annual Conference on Genetic And Evolutionary Computation, 1319–1326.
    link|pdf
  4. Brockhoff, D., Bischl, B., & Wagner, T. (2015). The Impact of Initial Designs on the Performance of MATSuMoTo on the Noiseless BBOB-2015 Testbed: A Preliminary Study. Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, 1159–1166. https://doi.org/10.1145/2739482.2768470
  5. Horn, D., Wagner, T., Biermann, D., Weihs, C., & Bischl, B. (2015). Model-Based Multi-Objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark. In A. Gaspar-Cunha, C. Henggeler Antunes, & C. C. Coello (Eds.), Evolutionary Multi-Criterion Optimization (EMO) (Vol. 9018, pp. 64–78). Springer. https://doi.org/10.1007/978-3-319-15934-8_5
  6. Casalicchio, G., Tutz, G., & Schauberger, G. (2015). Subject-specific Bradley–Terry–Luce models with implicit variable selection. Statistical Modelling, 15(6), 526–547.
    link | pdf
  7. Kotthaus, H., Korb, I., Lang, M., Bischl, B., Rahnenführer, J., & Marwedel, P. (2015). Runtime and memory consumption analyses for machine learning R programs. Journal of Statistical Computation and Simulation, 85(1), 14–29. https://doi.org/10.1080/00949655.2014.925192
  8. Lang, M., Kotthaus, H., Marwedel, P., Weihs, C., Rahnenführer, J., & Bischl, B. (2015). Automatic model selection for high-dimensional survival analysis. Journal of Statistical Computation and Simulation, 85(1), 62–76. https://doi.org/10.1080/00949655.2014.929131
  9. Bischl, B. (2015). Applying Model-Based Optimization to Hyperparameter Optimization in Machine Learning. Proceedings of the 2015 International Conference on Meta-Learning and Algorithm Selection - Volume 1455, 1.
    link|pdf
  10. Vanschoren, J., van Rijn, J. N., Bischl, B., Casalicchio, G., & Feurer, M. (2015). OpenML: A Networked Science Platform for Machine Learning. 2015 ICML Workshop on Machine Learning Open Source Software (MLOSS 2015), 1–3.
    link | pdf
  12. Bischl, B., Lang, M., Mersmann, O., Rahnenführer, J., & Weihs, C. (2015). BatchJobs and BatchExperiments: Abstraction Mechanisms for Using R in Batch Environments. Journal of Statistical Software, 64(11), 1–25.
    link
  13. Mersmann, O., Preuss, M., Trautmann, H., Bischl, B., & Weihs, C. (2015). Analyzing the BBOB Results by Means of Benchmarking Concepts. Evolutionary Computation Journal, 23(1), 161–185. https://doi.org/10.1162/EVCO_a_00134
  14. Vanschoren, J., Bischl, B., Hutter, F., Sebag, M., Kegl, B., Schmid, M., Napolitano, G., & Wolstencroft, K. (2015). Towards a data science collaboratory. Lecture Notes in Computer Science (IDA 2015), 9385.
    pdf

2014

  1. Bischl, B., Schiffner, J., & Weihs, C. (2014). Benchmarking Classification Algorithms on High-Performance Computing Clusters. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 23–31). Springer. https://doi.org/10.1007/978-3-319-01595-8_3
  2. Bischl, B., Wessing, S., Bauer, N., Friedrichs, K., & Weihs, C. (2014). MOI-MBO: Multiobjective Infill for Parallel Model-Based Optimization. In P. M. Pardalos, M. G. C. Resende, C. Vogiatzis, & J. L. Walteros (Eds.), Learning and Intelligent Optimization (pp. 173–186). Springer. https://doi.org/10.1007/978-3-319-09584-4_17
  3. Kerschke, P., Preuss, M., Hernández, C., Schütze, O., Sun, J.-Q., Grimme, C., Rudolph, G., Bischl, B., & Trautmann, H. (2014). Cell Mapping Techniques for Exploratory Landscape Analysis. Proceedings of the EVOLVE 2014: A Bridge Between Probability, Set Oriented Numerics, and Evolutionary Computation, 115–131.
    link|pdf
  4. Meyer, O., Bischl, B., & Weihs, C. (2014). Support Vector Machines on Large Data Sets: Simple Parallel Approaches. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 87–95). Springer. https://doi.org/10.1007/978-3-319-01595-8_10
  5. Vatolkin, I., Bischl, B., Rudolph, G., & Weihs, C. (2014). Statistical Comparison of Classifiers for Multi-objective Feature Selection in Instrument Recognition. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 171–178). Springer. https://doi.org/10.1007/978-3-319-01595-8_19
  6. Vanschoren, J., van Rijn, J. N., Bischl, B., & Torgo, L. (2014). OpenML: Networked Science in Machine Learning. SIGKDD Explorations Newsletter, 15(2), 49–60.
    link|pdf

2013

  1. Hess, S., Wagner, T., & Bischl, B. (2013). PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection. In G. Nicosia & P. Pardalos (Eds.), Learning and Intelligent Optimization (pp. 110–124). Springer. https://doi.org/10.1007/978-3-642-44973-4_13
  2. Mersmann, O., Bischl, B., Trautmann, H., Wagner, M., Bossek, J., & Neumann, F. (2013). A novel feature-based approach to characterize algorithm performance for the traveling salesperson problem. Annals of Mathematics and Artificial Intelligence, March, 1–32. https://doi.org/10.1007/s10472-013-9341-2
  3. van Rijn, J., Bischl, B., Torgo, L., Gao, G., Umaashankar, V., Fischer, S., Winter, P., Wiswedel, B., Berthold, M. R., & Vanschoren, J. (2013). OpenML: A Collaborative Science Platform. Machine Learning and Knowledge Discovery in Databases, 645–649. https://doi.org/10.1007/978-3-642-40994-3_46
  4. Bischl, B., Schiffner, J., & Weihs, C. (2013). Benchmarking local classification methods. Computational Statistics, 28(6), 2599–2619. https://doi.org/10.1007/s00180-013-0420-y
  5. Bergmann, S., Ziegler, N., Bartels, T., Hübel, J., Schumacher, C., Rauch, E., Brandl, S., Bender, A., Casalicchio, G., Krautwald-Junghanns, M.-E., & others. (2013). Prevalence and severity of foot pad alterations in German turkey poults during the early rearing phase. Poultry Science, 92(5), 1171–1176.
    link|pdf
  6. Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., & Trautmann, H. (2013, January 16). A Feature-Based Comparison of Local Search and the Christofides Algorithm for the Travelling Salesperson Problem. Foundations of Genetic Algorithms (FOGA). https://doi.org/10.1145/2460239.2460253
  7. Ziegler, N., Bergmann, S., Hübel, J., Bartels, T., Schumacher, C., Bender, A., Casalicchio, G., Kuechenhoff, H., Krautwald-Junghanns, M.-E., & Erhard, M. (2013). Climate parameters and the influence on the foot pad health status of fattening turkeys BUT 6 during the early rearing phase. Berliner und Münchener Tierärztliche Wochenschrift, 126(5-6), 181–188.
    link|pdf
  8. van Rijn, J., Umaashankar, V., Fischer, S., Bischl, B., Torgo, L., Gao, B., Winter, P., Wiswedel, B., Berthold, M. R., & Vanschoren, J. (2013). A RapidMiner extension for Open Machine Learning. RapidMiner Community Meeting and Conference (RCOMM), 59–70.
    link|pdf

2012

  1. Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., & Trautmann, H. (2012). Features of Easy and Hard Instances for Approximation Algorithms and the Traveling Salesperson Problem. Citeseer.
    link|pdf
  2. Bischl, B., Mersmann, O., Trautmann, H., & Preuss, M. (2012). Algorithm Selection Based on Exploratory Landscape Analysis and Cost-Sensitive Learning. Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, 313–320. https://doi.org/10.1145/2330163.2330209
  3. Koch, P., Bischl, B., Flasch, O., Bartz-Beielstein, T., Weihs, C., & Konen, W. (2012). Tuning and evolution of support vector kernels. Evolutionary Intelligence, 5(3), 153–170. https://doi.org/10.1007/s12065-012-0073-8
  4. Mersmann, O., Bischl, B., Bossek, J., Trautmann, H., Wagner, M., & Neumann, F. (2012). Local Search and the Traveling Salesman Problem: A Feature-Based Characterization of Problem Hardness. Learning and Intelligent Optimization Conference (LION), 115–129. https://doi.org/10.1007/978-3-642-34413-8_9
  5. Schiffner, J., Bischl, B., & Weihs, C. (2012). Bias-variance analysis of local classification methods. In W. Gaul, A. Geyer-Schulz, L. Schmidt-Thieme, & J. Kunze (Eds.), Challenges at the Interface of Data Analysis, Computer Science, and Optimization (pp. 49–57). Springer. https://doi.org/10.1007/978-3-642-24466-7_6
  6. Weihs, C., Mersmann, O., Bischl, B., Fritsch, A., Trautmann, H., Karbach, T.-M., & Spaan, B. (2012). A Case Study on the Use of Statistical Classification Methods in Particle Physics. Challenges at the Interface of Data Analysis, Computer Science, and Optimization, 69–77.
    link
  7. Bischl, B., Mersmann, O., Trautmann, H., & Weihs, C. (2012). Resampling Methods for Meta-Model Validation with Recommendations for Evolutionary Computation. Evolutionary Computation, 20(2), 249–275. https://doi.org/10.1162/EVCO_a_00069
  8. Bischl, B., Lang, M., Mersmann, O., Rahnenfuehrer, J., & Weihs, C. (2012). Computing on high performance clusters with R: Packages BatchJobs and BatchExperiments. SFB 876, TU Dortmund University.
    link

2011

  1. Mersmann, O., Bischl, B., Trautmann, H., Preuss, M., Weihs, C., & Rudolph, G. (2011). Exploratory Landscape Analysis. In N. Krasnogor (Ed.), Proceedings of the 13th annual conference on genetic and evolutionary computation (GECCO ’11) (pp. 829–836). Association for Computing Machinery. https://doi.org/10.1145/2001576.2001690
  2. Blume, H., Bischl, B., Botteck, M., Igel, C., Martin, R., Roetter, G., Rudolph, G., Theimer, W., Vatolkin, I., & Weihs, C. (2011). Huge Music Archives on Mobile Devices. IEEE Signal Processing Magazine, 28(4), 24–39. https://doi.org/10.1109/MSP.2011.940880
  3. Weihs, C., Friedrichs, K., & Bischl, B. (2011). Statistics for hearing aids: Auralization. Second Bilateral German-Polish Symposium on Data Analysis and Its Applications (GPSDAA).
  4. Koch, P., Bischl, B., Flasch, O., Bartz-Beielstein, T., & Konen, W. (2011). On the Tuning and Evolution of Support Vector Kernels. Research Center CIOP (Computational Intelligence, Optimization and Data Mining).
    link

2010

  1. Bischl, B., Vatolkin, I., & Preuss, M. (2010). Selecting Small Audio Feature Sets in Music Classification by Means of Asymmetric Mutation. Parallel Problem Solving from Nature, PPSN XI, 6238, 314–323.
    link
  2. Szepannek, G., Gruhne, M., Bischl, B., Krey, S., Harczos, T., Klefenz, F., Dittmar, C., & Weihs, C. (2010). Perceptually Based Phoneme Recognition in Popular Music. In H. Locarek-Junge & C. Weihs (Eds.), Classification as a Tool for Research (Vol. 40, pp. 751–758). Springer. https://doi.org/10.1007/978-3-642-10745-0_83
  3. Bischl, B., Mersmann, O., & Trautmann, H. (2010). Resampling Methods in Model Validation. In T. Bartz-Beielstein, M. Chiarandini, L. Paquete, & M. Preuss (Eds.), WEMACS – Proceedings of the Workshop on Experimental Methods for the Assessment of Computational Systems, Technical Report TR 10-2-007. Department of Computer Science, TU Dortmund University.
    link
  4. Bischl, B., Eichhoff, M., & Weihs, C. (2010). Selecting Groups of Audio Features by Statistical Tests and the Group Lasso. 9. ITG Fachtagung Sprachkommunikation.
    link

2009

  1. Bischl, B., Ligges, U., & Weihs, C. (2009). Frequency estimation by DFT interpolation: A comparison of methods. SFB 475, Faculty of Statistics, TU Dortmund, Germany.
    link|pdf
  2. Szepannek, G., Bischl, B., & Weihs, C. (2009). On the combination of locally optimal pairwise classifiers. Engineering Applications of Artificial Intelligence, 22(1), 79–85. https://doi.org/10.1016/j.engappai.2008.04.009

2008

  1. Szepannek, G., Bischl, B., & Weihs, C. (2008). On the Combination of Locally Optimal Pairwise Classifiers. Journal of Engineering Applications of Artificial Intelligence, 22(1), 79–85.
    link

2007