Publications

A full list of publications in BibTeX format is available here.

[2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007]

2022

  1. Bothmann, L. (2022). Künstliche Intelligenz in der Strafverfolgung. In K. Peters (Ed.), Cyberkriminalität. LMU Munich.
    link
  2. Ghada, W., Casellas, E., Herbinger, J., Garcia-Benadí, A., Bothmann, L., Estrella, N., Bech, J., & Menzel, A. (2022). Stratiform and Convective Rain Classification Using Machine Learning Models and Micro Rain Radar. Remote Sensing, 14(18).
    link
  3. Ott, F., Rügamer, D., Heublein, L., Hamann, T., Barth, J., Bischl, B., & Mutschler, C. (2022). Benchmarking Online Sequence-to-Sequence and Character-based Handwriting Recognition from IMU-Enhanced Pens. International Journal on Document Analysis and Recognition (IJDAR).
    link|pdf
  4. Schiele, P., Berninger, C., & Rügamer, D. (2022). ARMA Cell: A Modular and Effective Approach for Neural Autoregressive Modeling. ArXiv Preprint ArXiv:2208.14919.
    link|pdf
  5. Schalk, D., Bischl, B., & Rügamer, D. (2022). Accelerated Componentwise Gradient Boosting using Efficient Data Representation and Momentum-based Optimization. Journal of Computational and Graphical Statistics.
    link | pdf
  6. Rath, K., Rügamer, D., Bischl, B., von Toussaint, U., Rea, C., Maris, A., Granetz, R., & Albert, C. (2022). Data augmentation for disruption prediction via robust surrogate models. Journal of Plasma Physics.
  7. Dandl, S., Pfisterer, F., & Bischl, B. (2022). Multi-Objective Counterfactual Fairness. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 328–331.
    link
  8. Mittermeier, M., Weigert, M., Rügamer, D., Küchenhoff, H., & Ludwig, R. (2022). A Deep Learning Version of Hess & Brezowsky's Classification of Großwetterlagen over Europe: Projection of Future Changes in a CMIP6 Large Ensemble. Environmental Research Letters.
  9. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022, June 29). Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift. ACM Multimedia.
    link|pdf
  10. Dandl, S., Hothorn, T., Seibold, H., Sverdrup, E., Wager, S., & Zeileis, A. (2022). What Makes Forest-Based Heterogeneous Treatment Effect Estimators Work? In arXiv:2206.10323.
    link
  11. Rügamer, D., Bender, A., Wiegrebe, S., Racek, D., Bischl, B., Müller, C., & Stachl, C. (2022, June 14). Factorized Structured Regression for Large-Scale Varying Coefficient Models. Machine Learning and Knowledge Discovery in Databases (ECML-PKDD).
    link|pdf
  12. Beaudry, G., Drouin, O., Gravel, J., Smyrnova, A., Bender, A., Orri, M., Geoffroy, M.-C., & Chadi, N. (2022). A Comparative Analysis of Pediatric Mental Health-Related Emergency Department Utilization in Montréal, Canada, before and during the COVID-19 Pandemic. Annals of General Psychiatry, 21(1), 17.
    link|pdf
  13. Klaß, A., Lorenz, S., Lauer-Schmaltz, M., Rügamer, D., Bischl, B., Mutschler, C., & Ott, F. (2022, June 4). Uncertainty-aware Evaluation of Time-Series Classification for Online Handwriting Recognition with Domain Shift. IJCAI-ECAI 2022, 1st International Workshop on Spatio-Temporal Reasoning and Learning.
  14. Fritz, C., Nicola, G. D., Günther, F., Rügamer, D., Rave, M., Schneble, M., Bender, A., Weigert, M., Brinks, R., Hoyer, A., Berger, U., Küchenhoff, H., & Kauermann, G. (2022). Challenges in Interpreting Epidemiological Surveillance Data - Experiences from Germany. Journal of Computational and Graphical Statistics.
  15. Rügamer, D. (2022). Additive Higher-Order Factorization Machines. ArXiv Preprint ArXiv:2205.14515.
    link|pdf
  16. Bothmann, L., Peters, K., & Bischl, B. (2022). What Is Fairness? Implications For FairML. ArXiv:2205.09622 [Cs, Stat].
    link
  17. Rügamer, D., Kolb, C., Fritz, C., Pfisterer, F., Kopper, P., Bischl, B., Shen, R., Bukas, C., de Andrade e Sousa, L. B., Thalmeier, D., Baumann, P., Kook, L., Klein, N., & Müller, C. L. (2022). deepregression: a Flexible Neural Network Framework for Semi-Structured Deep Distributional Regression. Journal of Statistical Software (Provisionally Accepted).
    link|pdf
  18. Schalk, D., Hoffmann, V., Bischl, B., & Mansmann, U. (2022). Distributed non-disclosive validation of predictive models by a modified ROC-GLM. ArXiv Preprint ArXiv:2202.10828.
    link | pdf
  19. Liew, B. X. W., Kovacs, F. M., Rügamer, D., & Royuela, A. (2022). Machine learning for prognostic modelling in individuals with non-specific neck pain. European Spine Journal.
  20. Fritz, C., Dorigatti, E., & Rügamer, D. (2022). Combining Graph Neural Networks and Spatio-temporal Disease Models to Predict COVID-19 Cases in Germany. Scientific Reports, 12.
    link|pdf
  21. Sonabend, R., Bender, A., & Vollmer, S. (2022). Avoiding C-hacking When Evaluating Survival Distribution Predictions with Discrimination Measures (Number arXiv:2112.04828). arXiv.
    link|pdf
  22. Rügamer, D., Baumann, P., & Greven, S. (2022). Selective Inference for Additive and Mixed Models. Computational Statistics and Data Analysis, 167, 107350.
    link|pdf
  23. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2022). Cross-Modal Common Representation Learning with Triplet Loss Functions. ArXiv Preprint ArXiv:2202.07901.
    link|pdf
  24. Rügamer, D., Baumann, P. F. M., Kneib, T., & Hothorn, T. (2022). Probabilistic Time Series Forecasts with Autoregressive Transformation Models. ArXiv:2110.08248 [Cs, Stat].
    link|pdf
  25. Dorigatti, E., Goschenhofer, J., Schubert, B., Rezaei, M., & Bischl, B. (2022). Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection. ArXiv Preprint ArXiv:2109.05232.
    link|pdf
  26. Kopper, P., Wiegrebe, S., Bischl, B., Bender, A., & Rügamer, D. (2022). DeepPAMM: Deep Piecewise Exponential Additive Mixed Models for Complex Hazard Structures in Survival Analysis. Advances in Knowledge Discovery and Data Mining, 249–261.
    link|pdf
  27. Hartl, W. H., Kopper, P., Bender, A., Scheipl, F., Day, A. G., Elke, G., & Küchenhoff, H. (2022). Protein intake and outcome of critically ill patients: analysis of a large international database using piece-wise exponential additive mixed models. Critical Care, 26, 7.
    link|pdf
  28. Scholbeck, C. A., Funk, H., & Casalicchio, G. (2022). Algorithm-Agnostic Interpretations for Clustering.
    link
  29. Schneider, L., Schäpermeier, L., Prager, R. P., Bischl, B., Trautmann, H., & Kerschke, P. (2022). HPO X ELA: Investigating Hyperparameter Optimization Landscapes by Means of Exploratory Landscape Analysis. In G. Rudolph, A. V. Kononova, H. Aguirre, P. Kerschke, G. Ochoa, & T. Tušar (Eds.), Parallel Problem Solving from Nature – PPSN XVII (pp. 575–589). Springer International Publishing.
  30. Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., & Vanschoren, J. (2022). AMLB: an AutoML Benchmark. ArXiv Preprint ArXiv:2207.12560.
    link | pdf
  31. Böhme, R., Coors, S., Oster, P., Munser-Kiefer, M., & Hilbert, S. (2022). Machine learning for spelling acquisition - How accurate is the prediction of specific spelling errors in German primary school students? PsyArXiv. https://doi.org/10.31234/osf.io/shguf
  32. Karl, F., Pielok, T., Moosbauer, J., Pfisterer, F., Coors, S., Binder, M., Schneider, L., Thomas, J., Richter, J., Lang, M., & others. (2022). Multi-Objective Hyperparameter Optimization – An Overview. ArXiv Preprint ArXiv:2206.07438.
    link | pdf
  33. Schneider, L., Pfisterer, F., Thomas, J., & Bischl, B. (2022). A Collection of Quality Diversity Optimization Problems Derived from Hyperparameter Optimization of Machine Learning Models. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2136–2142.
    link | pdf
  34. Pargent, F., Pfisterer, F., Thomas, J., & Bischl, B. (2022). Regularized target encoding outperforms traditional methods in supervised machine learning with high cardinality features. Computational Statistics, 1–22.
    link | pdf
  35. Schneider, L., Pfisterer, F., Kent, P., Branke, J., Bischl, B., & Thomas, J. (2022). Tackling Neural Architecture Search With Quality Diversity Optimization. First Conference on Automated Machine Learning (Main Track).
    link | pdf
  36. Koch, P., Aßenmacher, M., & Heumann, C. (2022). Pre-trained language models evaluating themselves - A comparative study. Proceedings of the Third Workshop on Insights from Negative Results in NLP, 180–187.
    link|pdf
  37. Herbinger, J., Bischl, B., & Casalicchio, G. (2022). REPID: Regional Effect Plots with implicit Interaction Detection. International Conference on Artificial Intelligence and Statistics (AISTATS), 25.
    link | pdf
  38. Scholbeck, C. A., Casalicchio, G., Molnar, C., Bischl, B., & Heumann, C. (2022). Marginal Effects for Non-Linear Prediction Functions.
    link
  39. Pfisterer, F., Schneider, L., Moosbauer, J., Binder, M., & Bischl, B. (2022). YAHPO Gym – An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization. First Conference on Automated Machine Learning (Main Track).
    link | pdf
  40. Molnar, C., König, G., Herbinger, J., Freiesleben, T., Dandl, S., Scholbeck, C. A., Casalicchio, G., Grosse-Wentrup, M., & Bischl, B. (2022). General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models. In xxAI - Beyond Explainable AI (pp. 39–68). Springer International Publishing.
    link | pdf

2021

  1. Hilbert, S., Coors, S., Kraus, E., Bischl, B., Lindl, A., Frei, M., Wild, J., Krauss, S., Goretzko, D., & Stachl, C. (2021). Machine learning for the educational sciences. Review of Education, 9(3), e3310. https://doi.org/10.1002/rev3.3310
  2. Liew, B. X. W., Rügamer, D., Duffy, K., Taylor, M., & Jackson, J. (2021). The mechanical energetics of walking across the adult lifespan. PloS One, 16(11), e0259817.
    link
  3. Mittermeier, M., Weigert, M., & Rügamer, D. (2021). Identifying the atmospheric drivers of drought and heat using a smoothed deep learning approach. NeurIPS 2021, Tackling Climate Change with Machine Learning.
    link|pdf
  4. Weber, T., Ingrisch, M., Fabritius, M., Bischl, B., & Rügamer, D. (2021). Survival-oriented embeddings for improving accessibility to complex data structures. NeurIPS 2021, Bridging the Gap: From Machine Learning Research to Clinical Practice.
    link|pdf
  5. Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D. (2021). Towards modelling hazard factors in unstructured data spaces using gradient-based latent interpolation. NeurIPS 2021, Deep Generative Models and Downstream Applications.
    link|pdf
  6. Rügamer, D., Baumann, P. F. M., Kneib, T., & Hothorn, T. (2021). Transforming Autoregression: Interpretable and Expressive Time Series Forecast. In arXiv:2110.08248 [cs, stat].
    link|pdf
  7. Liew, B. X. W., Rügamer, D., Zhai, X. J., Morris, S., & Netto, K. (2021). Comparing machine, deep, and transfer learning in predicting joint moments in running. Journal of Biomechanics.
  8. Ott, F., Rügamer, D., Heublein, L., Bischl, B., & Mutschler, C. (2021, October 3). Joint Classification and Trajectory Regression of Online Handwriting using a Multi-Task Learning Approach. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
  9. Goschenhofer, J., Hvingelby, R., Rügamer, D., Thomas, J., Wagner, M., & Bischl, B. (2021, September 18). Deep Semi-Supervised Learning for Time Series Classification. 20th IEEE International Conference on Machine Learning and Applications (ICMLA).
    link | pdf
  10. Python, A., Bender, A., Blangiardo, M., Illian, J. B., Lin, Y., Liu, B., Lucas, T. C. D., Tan, S., Wen, Y., Svanidze, D., & Yin, J. (2021). A Downscaling Approach to Compare COVID-19 Count Data from Databases Aggregated at Different Spatial Scales. Journal of the Royal Statistical Society: Series A (Statistics in Society). https://doi.org/10.1111/rssa.12738
  11. Bauer, A., Klima, A., Gauß, J., Kümpel, H., Bender, A., & Küchenhoff, H. (2021). Mundus Vult Decipi, Ergo Decipiatur: Visual Communication of Uncertainty in Election Polls. PS: Political Science & Politics, 1–7. https://doi.org/10.1017/S1049096521000950
  12. Rezaei, M., Soleymani, F., Bischl, B., & Azizi, S. (2021). Deep Bregman Divergence for Contrastive Learning of Visual Representations. ArXiv Preprint ArXiv:2109.07455.
  13. Rezaei, M., Dorigatti, E., Rügamer, D., & Bischl, B. (2021). Learning Statistical Representation with Joint Deep Embedded Clustering. ArXiv Preprint ArXiv:2109.05232.
  14. Soleymani, F., Eslami, M., Elze, T., Bischl, B., & Rezaei, M. (2021). Deep Variational Clustering Framework for Self-labeling of Large-scale Medical Images. ArXiv Preprint ArXiv:2109.10777.
  15. Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., & Vanschoren, J. (2021). OpenML Benchmarking Suites. NeurIPS 2021, Datasets and Benchmarks Track.
    link | pdf
  16. Fabritius, M. P., Seidensticker, M., Rueckel, J., Heinze, C., Pech, M., Paprottka, K. J., Paprottka, P. M., Topalis, J., Bender, A., Ricke, J., Mittermeier, A., & Ingrisch, M. (2021). Bi-Centric Independent Validation of Outcome Prediction after Radioembolization of Primary and Secondary Liver Cancer. Journal of Clinical Medicine, 10(16), 3668. https://doi.org/10.3390/jcm10163668
  17. Pfisterer, F., Kern, C., Dandl, S., Sun, M., Kim, M. P., & Bischl, B. (2021). mcboost: Multi-Calibration Boosting for R. Journal of Open Source Software, 6(64), 3453. https://doi.org/10.21105/joss.03453
  18. Bothmann, L., Strickroth, S., Casalicchio, G., Rügamer, D., Lindauer, M., Scheipl, F., & Bischl, B. (2021). Developing Open Source Educational Resources for Machine Learning and Data Science. ArXiv:2107.14330 [Cs, Stat].
    link
  19. Falla, D., Devecchi, V., Jimenez-Grande, D., Rügamer, D., & Liew, B. (2021). Modern Machine Learning Approaches Applied in Spinal Pain Research. In Journal of Electromyography and Kinesiology.
  20. *Coors, S., *Schalk, D., Bischl, B., & Rügamer, D. (2021). Automatic Componentwise Boosting: An Interpretable AutoML System. ECML-PKDD Workshop on Automating Data Science.
    link | pdf
  21. Bischl, B., Binder, M., Lang, M., Pielok, T., Richter, J., Coors, S., Thomas, J., Ullmann, T., Becker, M., Boulesteix, A.-L., Deng, D., & Lindauer, M. (2021). Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges. ArXiv Preprint ArXiv:2107.05847.
    link | pdf
  22. Berninger, C., Stöcker, A., & Rügamer, D. (2021). A Bayesian Time-Varying Autoregressive Model for Improved Short- and Long-Term Prediction. Journal of Forecasting.
    link|pdf
  23. Python, A., Bender, A., Nandi, A. K., Hancock, P. A., Arambepola, R., Brandsch, J., & Lucas, T. C. D. (2021). Predicting non-state terrorism worldwide. Science Advances, 7(31), eabg4778. https://doi.org/10.1126/sciadv.abg4778
  24. Baumann, P. F. M., Hothorn, T., & Rügamer, D. (2021). Deep Conditional Transformation Models. Machine Learning and Knowledge Discovery in Databases. Research Track, 3–18.
    link|pdf
  25. König, G., Freiesleben, T., Bischl, B., Casalicchio, G., & Grosse-Wentrup, M. (2021). Decomposition of Global Feature Importance into Direct and Associative Components (DEDACT).
    link
  26. Ramjith, J., Bender, A., Roes, K. C. B., & Jonker, M. A. (2021). Recurrent Events Analysis with Piece-wise exponential Additive Mixed Models. Research Square. https://doi.org/10.21203/rs.3.rs-563303/v1
  27. Pfisterer, F., van Rijn, J. N., Probst, P., Müller, A., & Bischl, B. (2021). Learning Multiple Defaults for Machine Learning Algorithms. 2021 Genetic and Evolutionary Computation Conference Companion (GECCO ’21 Companion). https://doi.org/10.1145/3449726.3459532
  28. Gijsbers, P., Pfisterer, F., van Rijn, J. N., Bischl, B., & Vanschoren, J. (2021). Meta-Learning for Symbolic Hyperparameter Defaults. In 2021 Genetic and Evolutionary Computation Conference Companion (GECCO ’21 Companion). ACM. https://doi.org/10.1145/3449726.3459532
  29. Rath, K., Albert, C. G., Bischl, B., & von Toussaint, U. (2021). Symplectic Gaussian process regression of maps in Hamiltonian systems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(5), 053121. https://doi.org/10.1063/5.0048129
  30. Kopper, P., Pölsterl, S., Wachinger, C., Bischl, B., Bender, A., & Rügamer, D. (2021). Semi-Structured Deep Piecewise Exponential Models. In R. Greiner, N. Kumar, T. A. Gerds, & M. van der Schaar (Eds.), Proceedings of AAAI Spring Symposium on Survival Prediction - Algorithms, Challenges, and Applications 2021 (Vol. 146, pp. 40–53). PMLR.
    link|pdf
  31. König, G., Molnar, C., Bischl, B., & Grosse-Wentrup, M. (2021). Relative Feature Importance. 2020 25th International Conference on Pattern Recognition (ICPR), 9318–9325.
    link | pdf
  32. Gerostathopoulos, I., Plášil, F., Prehofer, C., Thomas, J., & Bischl, B. (2021). Automated Online Experiment-Driven Adaptation–Mechanics and Cost Aspects. IEEE Access, 9, 58079–58087.
    link | pdf
  33. Liew, B., Lee, H. Y., Rügamer, D., Nunzio, A. M. D., Heneghan, N. R., Falla, D., & Evans, D. W. (2021). A novel metric of reliability in pressure pain threshold measurement. Scientific Reports (Nature).
  34. Küchenhoff, H., Günther, F., Höhle, M., & Bender, A. (2021). Analysis of the early COVID-19 epidemic curve in Germany by regression models with change points. Epidemiology & Infection, 1–17. https://doi.org/10.1017/S0950268821000558
  35. Bender, A., Rügamer, D., Scheipl, F., & Bischl, B. (2021). A General Machine Learning Framework for Survival Analysis. In F. Hutter, K. Kersting, J. Lijffijt, & I. Valera (Eds.), Machine Learning and Knowledge Discovery in Databases (pp. 158–173). Springer International Publishing. https://doi.org/10.1007/978-3-030-67664-3_10
  36. Sonabend, R., Király, F. J., Bender, A., Bischl, B., & Lang, M. (2021). mlr3proba: An R Package for Machine Learning in Survival Analysis. Bioinformatics, btab039. https://doi.org/10.1093/bioinformatics/btab039
  37. Agrawal, A., Pfisterer, F., Bischl, B., Chen, J., Sood, S., Shah, S., Buet-Golfouse, F., Mateen, B. A., & Vollmer, S. J. (2021). Debiasing classifiers: is reality at variance with expectation? Available at SSRN 3711681.
    link
  38. Kaminwar, S. R., Goschenhofer, J., Thomas, J., Thon, I., & Bischl, B. (2021). Structured Verification of Machine Learning Models in Industrial Settings. Big Data.
    link
  39. Molnar, C., Freiesleben, T., König, G., Casalicchio, G., Wright, M. N., & Bischl, B. (2021). Relating the Partial Dependence Plot and Permutation Feature Importance to the Data Generating Process. ArXiv Preprint ArXiv:2109.01433.
    link | pdf
  40. Moosbauer, J., Herbinger, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2021). Explaining Hyperparameter Optimization via Partial Dependence Plots. Advances in Neural Information Processing Systems (NeurIPS 2021), 34.
    link | pdf
  41. Moosbauer, J., Herbinger, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2021). Towards Explaining Hyperparameter Optimization via Partial Dependence Plots. 8th ICML Workshop on Automated Machine Learning (AutoML).
    link | pdf
  42. Moosbauer, J., Binder, M., Schneider, L., Pfisterer, F., Becker, M., Lang, M., Kotthoff, L., & Bischl, B. (2021). Automated Benchmark-Driven Design and Explanation of Hyperparameter Optimizers.
    link | pdf
  43. Binder, M., Pfisterer, F., Lang, M., Schneider, L., Kotthoff, L., & Bischl, B. (2021). mlr3pipelines - Flexible Machine Learning Pipelines in R. Journal of Machine Learning Research, 22(184), 1–7.
    link | pdf
  44. Schneider, L., Pfisterer, F., Binder, M., & Bischl, B. (2021). Mutation is all you need. 8th ICML Workshop on Automated Machine Learning.
    pdf
  45. Becker, M., Binder, M., Bischl, B., Lang, M., Pfisterer, F., Reich, N. G., Richter, J., Schratz, P., & Sonabend, R. (2021). mlr3 book.
    link
  46. Au, Q., Herbinger, J., Stachl, C., Bischl, B., & Casalicchio, G. (2021). Grouped Feature Importance and Combined Features Effect Plot. ArXiv Preprint ArXiv:2104.11688.
    link | pdf

2020

  1. Günther, F., Bender, A., Katz, K., Küchenhoff, H., & Höhle, M. (2020). Nowcasting the COVID-19 pandemic in Bavaria. Biometrical Journal.
    link|pdf
  2. Liew, B. X. W., Peolsson, A., Rügamer, D., Wibault, J., Löfgren, H., Dedering, A., Zsigmond, P., & Falla, D. (2020). Clinical predictive modelling of post-surgical recovery in individuals with cervical radiculopathy – a machine learning approach. Scientific Reports.
    link
  3. Rügamer, D., Pfisterer, F., & Bischl, B. (2020). Neural Mixture Distributional Regression. ArXiv:2010.06889 [Cs, Stat].
    link|pdf
  4. Guenther, F., Bender, A., Höhle, M., Wildner, M., & Küchenhoff, H. (2020). Analysis of the COVID-19 pandemic in Bavaria: adjusting for misclassification. MedRxiv, 2020.09.29.20203877. https://doi.org/10.1101/2020.09.29.20203877
  5. Dandl, S., Molnar, C., Binder, M., & Bischl, B. (2020). Multi-Objective Counterfactual Explanations. In T. Bäck, M. Preuss, A. Deutz, H. Wang, C. Doerr, M. Emmerich, & H. Trautmann (Eds.), Parallel Problem Solving from Nature – PPSN XVI (pp. 448–469). Springer International Publishing.
    link
  6. Schratz, P., Muenchow, J., Iturritxa, E., Cortés, J., Bischl, B., & Brenning, A. (2020). Monitoring forest health using hyperspectral imagery: Does feature selection improve the performance of machine-learning techniques?
    link
  7. Bender, A., Python, A., Lindsay, S. W., Golding, N., & Moyes, C. L. (2020). Modelling geospatial distributions of the triatomine vectors of Trypanosoma cruzi in Latin America. PLOS Neglected Tropical Diseases, 14(8), e0008411. https://doi.org/10.1371/journal.pntd.0008411
  8. Binder, M., Pfisterer, F., & Bischl, B. (2020, July 18). Collecting Empirical Data About Hyperparameters for Data Driven AutoML. AutoML Workshop at ICML 2020.
    pdf
  9. Binder, M., Moosbauer, J., Thomas, J., & Bischl, B. (2020). Multi-Objective Hyperparameter Tuning and Feature Selection Using Filter Ensembles. Proceedings of the 2020 Genetic and Evolutionary Computation Conference, 471–479. https://doi.org/10.1145/3377930.3389815
  10. Beggel, L., Pfeiffer, M., & Bischl, B. (2020). Robust Anomaly Detection in Images using Adversarial Autoencoders. Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 206–222.
    link | pdf
  11. Dorigatti, E., & Schubert, B. (2020). Joint epitope selection and spacer design for string-of-beads vaccines. BioRxiv. https://doi.org/10.1101/2020.04.25.060988
  12. Pfister, F. M. J., Um, T. T., Pichler, D. C., Goschenhofer, J., Abedinpour, K., Lang, M., Endo, S., Ceballos-Baumann, A. O., Hirche, S., Bischl, B., & others. (2020). High-Resolution Motor State Detection in Parkinson’s Disease Using Convolutional Neural Networks. Scientific Reports, 10(1), 1–11.
    link
  13. Scholbeck, C. A., Molnar, C., Heumann, C., Bischl, B., & Casalicchio, G. (2020). Sampling, Intervention, Prediction, Aggregation: A Generalized Framework for Model-Agnostic Interpretations. In P. Cellier & K. Driessens (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019 (pp. 205–216). Springer International Publishing.
    link | pdf
  14. Goerigk, S., Hilbert, S., Jobst, A., Falkai, P., Bühner, M., Stachl, C., Bischl, B., Coors, S., Ehring, T., Padberg, F., & Sarubin, N. (2020). Predicting instructed simulation and dissimulation when screening for depressive symptoms. European Archives of Psychiatry and Clinical Neuroscience, 270(2), 153–168.
    link
  15. Rügamer, D., Kolb, C., & Klein, N. (2020). A Unified Network Architecture for Semi-Structured Deep Distributional Regression. ArXiv:2002.05777 [Cs, Stat].
    link|pdf
  16. Molnar, C., König, G., Bischl, B., & Casalicchio, G. (2020). Model-agnostic Feature Importance and Effects with Dependent Features–A Conditional Subgroup Approach. ArXiv Preprint ArXiv:2006.04628.
    link | pdf
  17. Molnar, C., König, G., Herbinger, J., Freiesleben, T., Dandl, S., Scholbeck, C. A., Casalicchio, G., Grosse-Wentrup, M., & Bischl, B. (2020). Pitfalls to Avoid when Interpreting Machine Learning Models. ICML Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers.
    link | pdf
  18. Molnar, C., Casalicchio, G., & Bischl, B. (2020). Quantifying Model Complexity via Functional Decomposition for Better Post-hoc Interpretability. In P. Cellier & K. Driessens (Eds.), Machine Learning and Knowledge Discovery in Databases (pp. 193–204). Springer International Publishing.
    link | pdf
  19. Molnar, C., Casalicchio, G., & Bischl, B. (2020). Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges. In I. Koprinska, M. Kamp, A. Appice, C. Loglisci, L. Antonie, A. Zimmermann, R. Guidotti, Ö. Özgöbek, R. P. Ribeiro, R. Gavaldà, J. Gama, L. Adilova, Y. Krishnamurthy, P. M. Ferreira, D. Malerba, I. Medeiros, M. Ceci, G. Manco, E. Masciari, … J. A. Gulla (Eds.), ECML PKDD 2020 Workshops (pp. 417–431). Springer International Publishing.
    link | pdf
  20. Stachl, C., Au, Q., Schoedel, R., Gosling, S. D., Harari, G. M., Buschek, D., Völkel, S. T., Schuwerk, T., Oldemeier, M., Ullmann, T., & others. (2020). Predicting personality from patterns of behavior collected with smartphones. Proceedings of the National Academy of Sciences.
    link | pdf
  21. Bommert, A., Sun, X., Bischl, B., Rahnenführer, J., & Lang, M. (2020). Benchmark for filter methods for feature selection in high-dimensional classification data. Computational Statistics & Data Analysis, 143, 106839.
    link | pdf
  22. Sun, X., Bommert, A., Pfisterer, F., Rahnenführer, J., Lang, M., & Bischl, B. (2020). High Dimensional Restrictive Federated Model Selection with Multi-objective Bayesian Optimization over Shifted Distributions. In Y. Bi, R. Bhatia, & S. Kapoor (Eds.), Intelligent Systems and Applications (pp. 629–647). Springer International Publishing. https://doi.org/10.1007/978-3-030-29516-5_48
  23. Rügamer, D., & Greven, S. (2020). Inference for L2-Boosting. Statistics and Computing, 30, 279–289.
    link|pdf
  24. Liew, B. X. W., Rügamer, D., Stöcker, A., & De Nunzio, A. M. (2020). Classifying neck pain status using scalar and functional biomechanical variables – development of a method using functional data boosting. Gait & Posture, 75, 146–150.
    link
  25. Liew, B., Rügamer, D., De Nunzio, A., & Falla, D. (2020). Interpretable machine learning models for classifying low back pain status using functional physiological variables. European Spine Journal, 29, 1845–1859.
    link
  26. Liew, B. X. W., Rügamer, D., Abichandani, D., & De Nunzio, A. M. (2020). Classifying individuals with and without patellofemoral pain syndrome using ground force profiles – Development of a method using functional data boosting. Gait & Posture, 80, 90–95.
    link
  27. Ellenbach, N., Boulesteix, A.-L., Bischl, B., Unger, K., & Hornung, R. (2020). Improved Outcome Prediction Across Data Sources Through Robust Parameter Tuning. Journal of Classification, 1–20.
    link|pdf
  28. Brockhaus, S., Rügamer, D., & Greven, S. (2020). Boosting Functional Regression Models with FDboost. Journal of Statistical Software, 94(10), 1–50.
  29. Dorigatti, E., & Schubert, B. (2020). Graph-theoretical formulation of the generalized epitope-based vaccine design problem. PLOS Computational Biology, 16(10), e1008237. https://doi.org/10.1371/journal.pcbi.1008237

2019

  1. Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., & Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44), 1903.
    link | pdf
  2. Sun, X., & Bischl, B. (2019, December 6). Tutorial and Survey on Probabilistic Graphical Model and Variational Inference in Deep Reinforcement Learning. 2019 IEEE Symposium Series on Computational Intelligence (SSCI).
    link|pdf
  3. Pfisterer, F., Thomas, J., & Bischl, B. (2019). Towards Human Centered AutoML. In arXiv preprint arXiv:1911.02391.
    link | pdf
  4. Pfisterer, F., Beggel, L., Sun, X., Scheipl, F., & Bischl, B. (2019). Benchmarking time series classification – Functional data vs machine learning approaches. In arXiv preprint arXiv:1911.07511.
    link | pdf
  5. Schmid, M., Bischl, B., & Kestler, H. A. (2019). Proceedings of Reisensburg 2016–2017. Springer.
    link
  6. Beggel, L., Kausler, B. X., Schiegg, M., Pfeiffer, M., & Bischl, B. (2019). Time series anomaly detection based on shapelet learning. Computational Statistics, 34(3), 945–976.
    link | pdf
  7. Pfisterer, F., Coors, S., Thomas, J., & Bischl, B. (2019). Multi-Objective Automatic Machine Learning with AutoxgboostMC. In arXiv preprint arXiv:1908.10796.
    link | pdf
  8. Sun, X., Lin, J., & Bischl, B. (2019). ReinBo: Machine Learning pipeline search and configuration with Bayesian Optimization embedded Reinforcement Learning. CoRR, abs/1904.05381.
    link | pdf
  9. Au, Q., Schalk, D., Casalicchio, G., Schoedel, R., Stachl, C., & Bischl, B. (2019). Component-Wise Boosting of Targets for Multi-Output Prediction. ArXiv Preprint ArXiv:1904.03943.
    link | pdf
  10. Probst, P., Boulesteix, A.-L., & Bischl, B. (2019). Tunability: Importance of Hyperparameters of Machine Learning Algorithms. Journal of Machine Learning Research, 20(53), 1–32.
    link | pdf
  11. Casalicchio, G., Molnar, C., & Bischl, B. (2019). Visualizing the Feature Importance for Black Box Models. In M. Berlingerio, F. Bonchi, T. Gärtner, N. Hurley, & G. Ifrim (Eds.), Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2018 (pp. 655–670). Springer International Publishing.
    link | pdf
  12. Völkel, S. T., Schödel, R., Buschek, D., Stachl, C., Au, Q., Bischl, B., Bühner, M., & Hussmann, H. (2019). Opportunities and challenges of utilizing personality traits for personalization in HCI. Personalized Human-Computer Interaction, 31–65.
    link
  13. Gijsbers, P., LeDell, E., Thomas, J., Poirier, S., Bischl, B., & Vanschoren, J. (2019). An Open Source AutoML Benchmark. CoRR, abs/1907.00909.
    link | pdf
  14. Sun, X., Wang, Y., Gossmann, A., & Bischl, B. (2019). Resampling-based Assessment of Robustness to Distribution Shift for Deep Neural Networks. CoRR, abs/1906.02972.
    link | pdf
  15. Pfister, F. M. J., von Schumann, A., Bemetz, J., Thomas, J., Ceballos-Baumann, A., Bischl, B., & Fietzek, U. (2019). Recognition of subjects with early-stage Parkinson from free-living unilateral wrist-sensor data using a hierarchical machine learning model. Journal of Neural Transmission, 126(5), 663–663.
  16. Schüller, N., Boulesteix, A.-L., Bischl, B., Unger, K., & Hornung, R. (2019). Improved outcome prediction across data sources through robust parameter tuning (Vol. 221).
    link | pdf
  17. Stachl, C., Au, Q., Schoedel, R., Buschek, D., Völkel, S., Schuwerk, T., Oldemeier, M., Ullmann, T., Hussmann, H., Bischl, B., & Bühner, M. (2019). Behavioral Patterns in Smartphone Usage Predict Big Five Personality Traits. https://doi.org/10.31234/osf.io/ks4vd
  18. Schuwerk, T., Kaltefleiter, L. J., Au, J.-Q., Hoesl, A., & Stachl, C. (2019). Enter the Wild: Autistic Traits and Their Relationship to Mentalizing and Social Interaction in Everyday Life. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-019-04134-6
  19. König, G., & Grosse-Wentrup, M. (2019). A Causal Perspective on Challenges for AI in Precision Medicine.
    link
  20. Goschenhofer, J., Pfister, F. M. J., Yuksel, K. A., Bischl, B., Fietzek, U., & Thomas, J. (2019). Wearable-based Parkinson’s Disease Severity Monitoring using Deep Learning. Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2019, 400–415.
    link | pdf
  21. Sun, X., Gossmann, A., Wang, Y., & Bischl, B. (2019). Variational Resampling Based Assessment of Deep Neural Networks under Distribution Shift. 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 1344–1353.
    link|pdf

2018

  1. van Rijn, J. N., Pfisterer, F., Thomas, J., Bischl, B., & Vanschoren, J. (2018, December 8). Meta Learning for Defaults–Symbolic Defaults. NeurIPS 2018 Workshop on Meta Learning.
    link | pdf
  2. Arenas, D., Barp, E., Bohner, G., Churvay, V., Kiraly, F., Lienart, T., Vollmer, S., Innes, M., & Bischl, B. (2018). Workshop contribution MLJ.
    pdf
  3. Molnar, C., Casalicchio, G., & Bischl, B. (2018). iml: An R package for Interpretable Machine Learning. The Journal of Open Source Software, 3, 786.
    link | pdf
  4. Kestler, H. A., Bischl, B., & Schmid, M. (2018). Proceedings of Reisensburg 2014–2015. Springer.
    link
  5. Bender, A., & Scheipl, F. (2018). pammtools: Piece-wise exponential Additive Mixed Modeling tools. ArXiv:1806.01042 [Stat].
    link | pdf
  6. Fossati, M., Dorigatti, E., & Giuliano, C. (2018). N-ary relation extraction for simultaneous T-Box and A-Box knowledge base augmentation. Semantic Web, 9(4), 413–439. https://doi.org/10.3233/SW-170269
  7. Thomas, J., Mayr, A., Bischl, B., Schmid, M., Smith, A., & Hofner, B. (2018). Gradient boosting for distributional regression: faster tuning and improved variable selection via noncyclical updates. Statistics and Computing, 28(3), 673–687.
    link | pdf
  8. Kühn, D., Probst, P., Thomas, J., & Bischl, B. (2018). Automatic Exploration of Machine Learning Experiments on OpenML. ArXiv Preprint ArXiv:1806.10961.
    link | pdf
  9. Thomas, J., Coors, S., & Bischl, B. (2018). Automatic Gradient Boosting. ICML AutoML Workshop.
    link | pdf
  10. Schalk, D., Thomas, J., & Bischl, B. (2018). compboost: Modular Framework for Component-Wise Boosting. JOSS, 3(30), 967.
    link | pdf
  11. Horn, D., Demircioğlu, A., Bischl, B., Glasmachers, T., & Weihs, C. (2018). A Comparative Study on Large Scale Kernelized Support Vector Machines. Advances in Data Analysis and Classification, 1–17. https://doi.org/10.1007/s11634-016-0265-7
  12. Schoedel, R., Au, Q., Völkel, S. T., Lehmann, F., Becker, D., Bühner, M., Bischl, B., Hussmann, H., & Stachl, C. (2018). Digital Footprints of Sensation Seeking. Zeitschrift Für Psychologie, 226(4), 232–245. https://doi.org/10.1027/2151-2604/a000342
  13. Völkel, S. T., Graefe, J., Schödel, R., Häuslschmid, R., Stachl, C., Au, Q., & Hussmann, H. (2018). I Drive My Car and My States Drive Me. Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI ’18, 198–203. https://doi.org/10.1145/3239092.3267102
  14. Rügamer, D., & Greven, S. (2018). Selective inference after likelihood- or test-based model selection in linear models. Statistics & Probability Letters, 140, 7–12.
  15. Burdukiewicz, M., Karas, M., Jessen, L. E., Kosinski, M., Bischl, B., & Rödiger, S. (2018). Conference Report: Why R? 2018. The R Journal, 10(2), 572–578.
    pdf

2017

  1. Stachl, C., Hilbert, S., Au, Q., Buschek, D., De Luca, A., Bischl, B., Hussmann, H., & Bühner, M. (2017). Personality Traits Predict Smartphone Usage. European Journal of Personality, 31(6), 701–722. https://doi.org/10.1002/per.2113
  2. Cáceres, L. P., Bischl, B., & Stützle, T. (2017). Evaluating Random Forest Models for Irace. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 1146–1153.
    link|pdf
  3. Casalicchio, G., Lesaffre, E., Küchenhoff, H., & Bruyneel, L. (2017). Nonlinear Analysis to Detect if Excellent Nursing Work Environments Have Highest Well-Being. Journal of Nursing Scholarship, 49(5), 537–547.
    link | pdf
  4. Casalicchio, G., Bossek, J., Lang, M., Kirchhoff, D., Kerschke, P., Hofner, B., Seibold, H., Vanschoren, J., & Bischl, B. (2017). OpenML: An R package to connect to the machine learning platform OpenML. Computational Statistics, 1–15.
    link | pdf
  5. Probst, P., Au, Q., Casalicchio, G., Stachl, C., & Bischl, B. (2017). Multilabel Classification with R Package mlr. The R Journal, 9(1), 352–369.
    link | pdf
  6. Bischl, B., Richter, J., Bossek, J., Horn, D., Thomas, J., & Lang, M. (2017). mlrMBO: A Modular Framework for Model-Based Optimization of Expensive Black-Box Functions. ArXiv Preprint ArXiv:1703.03373.
    link | pdf
  7. Horn, D., Dagge, M., Sun, X., & Bischl, B. (2017). First Investigations on Noisy Model-Based Multi-objective Optimization. In Evolutionary Multi-Criterion Optimization: 9th International Conference, EMO 2017, Münster, Germany, March 19-22, 2017, Proceedings (pp. 298–313). Springer International Publishing. https://doi.org/10.1007/978-3-319-54157-0_21
  8. Beggel, L., Sun, X., & Bischl, B. (2017). mlrFDA: an R toolbox for functional data analysis. Ulmer Informatik-Berichte, 15.
    pdf
  9. Horn, D., Bischl, B., Demircioglu, A., Glasmachers, T., Wagner, T., & Weihs, C. (2017). Multi-objective selection of algorithm portfolios. Archives of Data Science.
    link
  10. Kotthaus, H., Richter, J., Lang, A., Thomas, J., Bischl, B., Marwedel, P., Rahnenführer, J., & Lang, M. (2017). RAMBO: Resource-Aware Model-Based Optimization with Scheduling for Heterogeneous Runtimes and a Comparison with Asynchronous Model-Based Optimization. International Conference on Learning and Intelligent Optimization, 180–195.
    link | pdf
  11. Thomas, J., Hepp, T., Mayr, A., & Bischl, B. (2017). Probing for sparse and fast variable selection with model-based boosting. Computational and Mathematical Methods in Medicine, 2017.
    link | pdf
  12. Lang, M., Bischl, B., & Surmann, D. (2017). batchtools: Tools for R to work on batch systems. The Journal of Open Source Software, 2(10).
    link

2016

  1. Richter, J., Kotthaus, H., Bischl, B., Marwedel, P., Rahnenführer, J., & Lang, M. (2019, May 29). Faster Model-Based Optimization through Resource-Aware Scheduling Strategies. Proceedings of the 10th Learning and Intelligent OptimizatioN Conference (LION 10).
    link|pdf
  2. Horn, D., & Bischl, B. (2016). Multi-objective Parameter Configuration of Machine Learning Algorithms using Model-Based Optimization. 2016 IEEE Symposium Series on Computational Intelligence (SSCI), 1–8.
    link|pdf
  3. Bischl, B., Lang, M., Kotthoff, L., Schiffner, J., Richter, J., Studerus, E., Casalicchio, G., & Jones, Z. M. (2016). mlr: Machine Learning in R. The Journal of Machine Learning Research, 17(1), 5938–5942.
    link | pdf
  4. Bauer, N., Friedrichs, K., Bischl, B., & Weihs, C. (2016, August 4). Fast Model Based Optimization of Tone Onset Detection by Instance Sampling. Data Analysis, Machine Learning and Knowledge Discovery.
    link
  5. Weihs, C., Horn, D., & Bischl, B. (2016). Big data Classification: Aspects on Many Features and Many Observations. In A. F. X. Wilhelm & H. A. Kestler (Eds.), Analysis of Large and Complex Data (pp. 113–122). Springer International Publishing. https://doi.org/10.1007/978-3-319-25226-1_10
  6. Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Frechétte, A., Hoos, H., Hutter, F., Leyton-Brown, K., Tierney, K., & Vanschoren, J. (2016). ASlib: A Benchmark Library for Algorithm Selection. Artificial Intelligence, 237, 41–58.
    link
  7. Bischl, B., Kühn, T., & Szepannek, G. (2016). On Class Imbalance Correction for Classification Algorithms in Credit Scoring. In Operations Research Proceedings 2014 (pp. 37–43). Springer International Publishing.
    link|pdf
  8. Demircioglu, A., Horn, D., Glasmachers, T., Bischl, B., & Weihs, C. (2016). Fast model selection by limiting SVM training times (Number arXiv:1602.03368v1). arXiv.
    link
  9. Casalicchio, G., Bischl, B., Boulesteix, A.-L., & Schmid, M. (2015). The residual-based predictiveness curve: A visual tool to assess the performance of prediction models. Biometrics, 72(2), 392–401.
    link | pdf
  10. Degroote, H., Bischl, B., Kotthoff, L., & De Causmaecker, P. (2016). Reinforcement Learning for Automatic Online Algorithm Selection - an Empirical Study. ITAT 2016 Proceedings, 1649, 93–101.
    link
  11. Feilke, M., Bischl, B., Schmid, V. J., & Gertheiss, J. (2016). Boosting in nonlinear regression models with an application to DCE-MRI data. Methods of Information in Medicine, 55(01), 31–41.
  12. Beggel, L., Kausler, B. X., Schiegg, M., & Bischl, B. (2016). Anomaly Detection with Shapelet-Based Feature Learning for Time Series. Ulmer Informatik-Berichte, 25.
    link | pdf
  13. Rietzler, M., Geiselhart, F., Thomas, J., & Rukzio, E. (2016). FusionKit: a generic toolkit for skeleton, marker and rigid-body tracking. Proceedings of the 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 73–84.
    link
  14. Schiffner, J., Bischl, B., Lang, M., Richter, J., Jones, Z. M., Probst, P., Pfisterer, F., Gallo, M., Kirchhoff, D., Kühn, T., Thomas, J., & Kotthoff, L. (2016). mlr Tutorial.
    link | pdf

2015

  1. Vanschoren, J., Rijn, J. N., & Bischl, B. (2015). Taking machine learning research online with OpenML. In W. Fan, A. Bifet, Q. Yang, & P. S. Yu (Eds.), Proceedings of the 4th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications (Vol. 41, pp. 1–4). PMLR.
    link|pdf
  2. Mantovani, R. G., Rossi, A. L. D., Vanschoren, J., Bischl, B., & Carvalho, A. C. P. L. F. (2015). To tune or not to tune: Recommending when to adjust SVM hyper-parameters via meta-learning. 2015 International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN.2015.7280644
  3. Mantovani, R. G., Rossi, A. L. D., Vanschoren, J., Bischl, B., & de Carvalho, A. C. P. L. F. (2015). Effectiveness of Random Search in SVM hyper-parameter tuning. 2015 International Joint Conference on Neural Networks (IJCNN), 1–8. https://doi.org/10.1109/IJCNN.2015.7280664
  4. Bossek, J., Bischl, B., Wagner, T., & Rudolph, G. (2015). Learning feature-parameter mappings for parameter tuning via the profile expected improvement. Proceedings of the 2015 Annual Conference on Genetic And Evolutionary Computation, 1319–1326.
    link|pdf
  5. Brockhoff, D., Bischl, B., & Wagner, T. (2015). The Impact of Initial Designs on the Performance of MATSuMoTo on the Noiseless BBOB-2015 Testbed: A Preliminary Study. Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, 1159–1166. https://doi.org/10.1145/2739482.2768470
  6. Horn, D., Wagner, T., Biermann, D., Weihs, C., & Bischl, B. (2015). Model-Based Multi-Objective Optimization: Taxonomy, Multi-Point Proposal, Toolbox and Benchmark. In A. Gaspar-Cunha, C. Henggeler Antunes, & C. C. Coello (Eds.), Evolutionary Multi-Criterion Optimization (EMO) (Vol. 9018, pp. 64–78). Springer. https://doi.org/10.1007/978-3-319-15934-8_5
  7. Casalicchio, G., Tutz, G., & Schauberger, G. (2015). Subject-specific Bradley–Terry–Luce models with implicit variable selection. Statistical Modelling, 15(6), 526–547.
    link | pdf
  8. Kotthaus, H., Korb, I., Lang, M., Bischl, B., Rahnenführer, J., & Marwedel, P. (2015). Runtime and memory consumption analyses for machine learning R programs. Journal of Statistical Computation and Simulation, 85(1), 14–29. https://doi.org/10.1080/00949655.2014.925192
  9. Lang, M., Kotthaus, H., Marwedel, P., Weihs, C., Rahnenführer, J., & Bischl, B. (2015). Automatic model selection for high-dimensional survival analysis. Journal of Statistical Computation and Simulation, 85(1), 62–76. https://doi.org/10.1080/00949655.2014.929131
  10. Bischl, B. (2015). Applying Model-Based Optimization to Hyperparameter Optimization in Machine Learning. Proceedings of the 2015 International Conference on Meta-Learning and Algorithm Selection - Volume 1455, 1.
    link|pdf
  11. Vanschoren, J., van Rijn, J. N., Bischl, B., & Casalicchio, G. (2015). OpenML: a networked science platform for machine learning. 2015 ICML Workshop on Machine Learning Open Source Software (MLOSS 2015), 1–3.
    link | pdf
  12. Bischl, B., Kühn, T., & Szepannek, G. (2015). On Class Imbalance Correction for Classification Algorithms in Credit Scoring. In Operations Research Proceedings 2014. Springer.
    link
  13. Bischl, B., Lang, M., Mersmann, O., Rahnenführer, J., & Weihs, C. (2015). BatchJobs and BatchExperiments: Abstraction Mechanisms for Using R in Batch Environments. Journal of Statistical Software, 64(11), 1–25.
    link
  14. Mersmann, O., Preuss, M., Trautmann, H., Bischl, B., & Weihs, C. (2015). Analyzing the BBOB Results by Means of Benchmarking Concepts. Evolutionary Computation Journal, 23(1), 161–185. https://doi.org/10.1162/EVCO_a_00134
  15. Vanschoren, J., Bischl, B., Hutter, F., Sebag, M., Kegl, B., Schmid, M., Napolitano, G., & Wolstencroft, K. (2015). Towards a data science collaboratory. Lecture Notes in Computer Science (IDA 2015), 9385.
    pdf

2014

  1. Bischl, B., Schiffner, J., & Weihs, C. (2014). Benchmarking Classification Algorithms on High-Performance Computing Clusters. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 23–31). Springer. https://doi.org/10.1007/978-3-319-01595-8_3
  2. Bischl, B., Wessing, S., Bauer, N., Friedrichs, K., & Weihs, C. (2014). MOI-MBO: Multiobjective Infill for Parallel Model-Based Optimization. In P. M. Pardalos, M. G. C. Resende, C. Vogiatzis, & J. L. Walteros (Eds.), Learning and Intelligent Optimization (pp. 173–186). Springer. https://doi.org/10.1007/978-3-319-09584-4_17
  3. Kerschke, P., Preuss, M., Hernández, C., Schütze, O., Sun, J.-Q., Grimme, C., Rudolph, G., Bischl, B., & Trautmann, H. (2014). Cell Mapping Techniques for Exploratory Landscape Analysis. Proceedings of the EVOLVE 2014: A Bridge Between Probability, Set Oriented Numerics, and Evolutionary Computation, 115–131.
    link | pdf
  4. Meyer, O., Bischl, B., & Weihs, C. (2014). Support Vector Machines on Large Data Sets: Simple Parallel Approaches. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 87–95). Springer. https://doi.org/10.1007/978-3-319-01595-8_10
  5. Vatolkin, I., Bischl, B., Rudolph, G., & Weihs, C. (2014). Statistical Comparison of Classifiers for Multi-objective Feature Selection in Instrument Recognition. In M. Spiliopoulou, L. Schmidt-Thieme, & R. Janning (Eds.), Data Analysis, Machine Learning and Knowledge Discovery (pp. 171–178). Springer. https://doi.org/10.1007/978-3-319-01595-8_19
  6. Vanschoren, J., van Rijn, J. N., Bischl, B., & Torgo, L. (2014). OpenML: Networked Science in Machine Learning. SIGKDD Explor. Newsl., 15(2), 49–60. https://doi.org/10.1145/2641190.2641198

2013

  1. Hess, S., Wagner, T., & Bischl, B. (2013). PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection. In G. Nicosia & P. Pardalos (Eds.), Learning and Intelligent Optimization (pp. 110–124). Springer. https://doi.org/10.1007/978-3-642-44973-4_13
  2. Mersmann, O., Bischl, B., Trautmann, H., Wagner, M., Bossek, J., & Neumann, F. (2013). A novel feature-based approach to characterize algorithm performance for the traveling salesperson problem. Annals of Mathematics and Artificial Intelligence, March, 1–32. https://doi.org/10.1007/s10472-013-9341-2
  3. van Rijn, J., Bischl, B., Torgo, L., Gao, G., Umaashankar, V., Fischer, S., Winter, P., Wiswedel, B., Berthold, M. R., & Vanschoren, J. (2013). OpenML: A Collaborative Science Platform. Machine Learning and Knowledge Discovery in Databases, 645–649. https://doi.org/10.1007/978-3-642-40994-3_46
  4. Bischl, B., Schiffner, J., & Weihs, C. (2013). Benchmarking local classification methods. Computational Statistics, 28(6), 2599–2619. https://doi.org/10.1007/s00180-013-0420-y
  5. Bergmann, S., Ziegler, N., Bartels, T., Hübel, J., Schumacher, C., Rauch, E., Brandl, S., Bender, A., Casalicchio, G., Krautwald-Junghanns, M.-E., & others. (2013). Prevalence and severity of foot pad alterations in German turkey poults during the early rearing phase. Poultry Science, 92(5), 1171–1176.
    link | pdf
  6. Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., & Trautmann, H. (2013, January 16). A Feature-Based Comparison of Local Search and the Christofides Algorithm for the Travelling Salesperson Problem. Foundations of Genetic Algorithms (FOGA). https://doi.org/10.1145/2460239.2460253
  7. Ziegler, N., Bergmann, S., Hübel, J., Bartels, T., Schumacher, C., Bender, A., Casalicchio, G., Küchenhoff, H., Krautwald-Junghanns, M.-E., & Erhard, M. (2013). Climate parameters and the influence on the foot pad health status of fattening turkeys BUT 6 during the early rearing phase. Berliner und Münchener Tierärztliche Wochenschrift, 126(5-6), 181–188.
    link | pdf
  8. van Rijn, J., Umaashankar, V., Fischer, S., Bischl, B., Torgo, L., Gao, B., Winter, P., Wiswedel, B., Berthold, M. R., & Vanschoren, J. (2013). A RapidMiner extension for Open Machine Learning. RapidMiner Community Meeting and Conference (RCOMM), 59–70.
    link | pdf

2012

  1. Nallaperuma, S., Wagner, M., Neumann, F., Bischl, B., Mersmann, O., & Trautmann, H. (2012). Features of Easy and Hard Instances for Approximation Algorithms and the Traveling Salesperson Problem. Citeseer.
    link | pdf
  2. Bischl, B., Mersmann, O., Trautmann, H., & Preuss, M. (2012). Algorithm Selection Based on Exploratory Landscape Analysis and Cost-Sensitive Learning. Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, 313–320. https://doi.org/10.1145/2330163.2330209
  3. Koch, P., Bischl, B., Flasch, O., Bartz-Beielstein, T., Weihs, C., & Konen, W. (2012). Tuning and evolution of support vector kernels. Evolutionary Intelligence, 5(3), 153–170. https://doi.org/10.1007/s12065-012-0073-8
  4. Mersmann, O., Bischl, B., Bossek, J., Trautmann, H., Wagner, M., & Neumann, F. (2012). Local Search and the Traveling Salesman Problem: A Feature-Based Characterization of Problem Hardness. Learning and Intelligent Optimization Conference (LION), 115–129. https://doi.org/10.1007/978-3-642-34413-8_9
  5. Schiffner, J., Bischl, B., & Weihs, C. (2012). Bias-variance analysis of local classification methods. In W. Gaul, A. Geyer-Schulz, L. Schmidt-Thieme, & J. Kunze (Eds.), Challenges at the Interface of Data Analysis, Computer Science, and Optimization (pp. 49–57). Springer. https://doi.org/10.1007/978-3-642-24466-7_6
  6. Weihs, C., Mersmann, O., Bischl, B., Fritsch, A., Trautmann, H., Karbach, T.-M., & Spaan, B. (2012). A Case Study on the Use of Statistical Classification Methods in Particle Physics. Challenges at the Interface of Data Analysis, Computer Science, and Optimization, 69–77.
    link
  7. Bischl, B., Mersmann, O., Trautmann, H., & Weihs, C. (2012). Resampling Methods for Meta-Model Validation with Recommendations for Evolutionary Computation. Evolutionary Computation, 20(2), 249–275. https://doi.org/10.1162/EVCO_a_00069
  8. Bischl, B., Lang, M., Mersmann, O., Rahnenfuehrer, J., & Weihs, C. (2012). Computing on high performance clusters with R: Packages BatchJobs and BatchExperiments. SFB 876, TU Dortmund University.
    link

2011

  1. Mersmann, O., Bischl, B., Trautmann, H., Preuss, M., Weihs, C., & Rudolph, G. (2011). Exploratory Landscape Analysis. In N. Krasnogor (Ed.), Proceedings of the 13th annual conference on genetic and evolutionary computation (GECCO ’11) (pp. 829–836). Association for Computing Machinery. https://doi.org/10.1145/2001576.2001690
  2. Blume, H., Bischl, B., Botteck, M., Igel, C., Martin, R., Roetter, G., Rudolph, G., Theimer, W., Vatolkin, I., & Weihs, C. (2011). Huge Music Archives on Mobile Devices. IEEE Signal Processing Magazine, 28(4), 24–39. https://doi.org/10.1109/MSP.2011.940880
  3. Weihs, C., Friedrichs, K., & Bischl, B. (2011). Statistics for hearing aids: Auralization. Second Bilateral German-Polish Symposium on Data Analysis and Its Applications (GPSDAA).
  4. Koch, P., Bischl, B., Flasch, O., Bartz-Beielstein, T., & Konen, W. (2011). On the Tuning and Evolution of Support Vector Kernels. Research Center CIOP (Computational Intelligence, Optimization and Data Mining).
    link

2010

  1. Bischl, B., Vatolkin, I., & Preuss, M. (2010). Selecting Small Audio Feature Sets in Music Classification by Means of Asymmetric Mutation. Parallel Problem Solving from Nature, PPSN XI, 6238, 314–323.
    link
  2. Szepannek, G., Gruhne, M., Bischl, B., Krey, S., Harczos, T., Klefenz, F., Dittmar, C., & Weihs, C. (2010). Perceptually Based Phoneme Recognition in Popular Music. In H. Locarek-Junge & C. Weihs (Eds.), Classification as a Tool for Research (Vol. 40, pp. 751–758). Springer. https://doi.org/10.1007/978-3-642-10745-0_83
  3. Bischl, B., Mersmann, O., & Trautmann, H. (2010). Resampling Methods in Model Validation. In T. Bartz-Beielstein, M. Chiarandini, L. Paquete, & M. Preuss (Eds.), WEMACS – Proceedings of the Workshop on Experimental Methods for the Assessment of Computational Systems, Technical Report TR 10-2-007. Department of Computer Science, TU Dortmund University.
    link
  4. Bischl, B., Eichhoff, M., & Weihs, C. (2010). Selecting Groups of Audio Features by Statistical Tests and the Group Lasso. 9. ITG Fachtagung Sprachkommunikation.
    link

2009

  1. Bischl, B., Ligges, U., & Weihs, C. (2009). Frequency estimation by DFT interpolation: A comparison of methods. SFB 475, Faculty of Statistics, TU Dortmund, Germany.
    link | pdf
  2. Szepannek, G., Bischl, B., & Weihs, C. (2009). On the combination of locally optimal pairwise classifiers. Engineering Applications of Artificial Intelligence, 22(1), 79–85. https://doi.org/10.1016/j.engappai.2008.04.009

2008

  1. Szepannek, G., Bischl, B., & Weihs, C. (2008). On the Combination of Locally Optimal Pairwise Classifiers. Engineering Applications of Artificial Intelligence, 22(1), 79–85.
    link

2007