The chair typically offers various thesis topics each semester in the areas of computational statistics, machine learning, data mining, optimization, and statistical software. You are welcome to suggest your own topic as well.

Before you apply for a thesis topic, make sure that you fit the following profile:

Before you start writing your thesis, you must find a supervisor within the group.

Send an email to the contact person listed in the potential thesis topics files with the following information:

Your application will only be processed if it contains all required information.

Potential Thesis Topics

[Potential Thesis Topics] [Student Research Projects] [Current Theses] [Completed Theses]

Below is a list of potential thesis topics. Before you start writing your thesis, you must find a supervisor within the group.

Available theses

Laplace Approximation for Uncertainty Quantification in Positive Unlabeled Learning

Reliable uncertainty quantification methods in positive unlabeled (PU) learning are sorely needed. The Laplace approximation is a simple yet powerful approximation to the posterior of a neural network that is easy and quick to compute. Unfortunately, it requires both positive and negative labels, so it cannot be applied directly to models trained on PU data. Hence, the topic of this Master’s thesis is to find a way to compute the Laplace approximation in PU learning, including an implementation and an experimental comparison with other uncertainty quantification methods.

For more details please see this Google document
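As background, the standard (full-data) Laplace approximation can be sketched for a plain logistic regression model; this also makes the obstacle explicit, since the Hessian below depends on labels for every instance. A minimal NumPy sketch, illustrative only and not part of the topic description:

```python
import numpy as np

def laplace_logreg(X, y, prior_prec=1.0, lr=0.5, steps=2000):
    """Standard Laplace approximation for Bayesian logistic regression:
    find the MAP weights, then approximate the posterior by a Gaussian
    N(w_map, H^{-1}), where H is the Hessian of the negative log-posterior."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) + prior_prec * w   # needs labels y in {0, 1}
        w -= lr * grad / n                      # plain gradient descent to the MAP
    p = 1.0 / (1.0 + np.exp(-X @ w))
    # Hessian at the MAP: Bernoulli variance term plus Gaussian prior precision
    H = X.T @ (X * (p * (1 - p))[:, None]) + prior_prec * np.eye(d)
    return w, np.linalg.inv(H)                  # posterior mean and covariance
```

Note how both the gradient and the Hessian use the observed label of every instance; in PU learning the negatives are unlabeled, which is exactly the gap this thesis should close.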

Knowledge distillation for proteasomal cleavage prediction in a vaccine design framework

The objective of the thesis is to apply techniques from knowledge distillation to derive a student model from an available cleavage predictor (binary classification on sequence data) that is suitable to be embedded in a vaccine design framework based on mixed-integer linear programming and with better performance than the linear predictor currently employed.

For more details please see this Google document

Learning Sets and Irregular Data Using Deep Meta-learning

Meta-learning, also known as “learning to learn”, aims to design models that can learn new skills or adapt to new environments rapidly from only a few training examples. There are three common approaches: 1) metric-based: learn an efficient distance metric; 2) model-based: use a (recurrent) network with external or internal memory; 3) optimization-based: optimize the model parameters explicitly for fast learning.

A typical machine/deep learning algorithm, such as regression or classification, is designed for fixed-dimensional data instances. Extending such algorithms to the case where the inputs or outputs are permutation-invariant sets rather than fixed-dimensional vectors is not trivial. However, much available data is unstructured or set-valued, such as point clouds, video/audio clips, and time series. Learning representations of unstructured data is very valuable in many domains such as health care and has a high impact on machine learning and deep learning research.

Here, we offer several research topics as Master’s theses; for more information please visit the Google doc.
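To make the permutation-invariance requirement concrete, one standard construction (in the spirit of Deep Sets) embeds each set element, pools with a symmetric operation, and maps the pooled vector to the output. A minimal NumPy sketch with hypothetical, untrained weights:

```python
import numpy as np

def deep_set(X, W_phi, W_rho):
    """Permutation-invariant set model: per-element embedding phi,
    sum pooling, then an output map rho. X has shape (set_size, d);
    the weight matrices are placeholders for illustration, not trained."""
    phi = np.maximum(0, X @ W_phi)   # embed each element independently (ReLU)
    pooled = phi.sum(axis=0)         # sum pooling removes element order
    return pooled @ W_rho            # set-level prediction
```

Because the pooling is a sum, reordering the rows of X cannot change the output, which is exactly the property fixed-dimensional models lack.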

Deep Reinforcement Learning

Deep reinforcement learning combines deep neural networks with a reinforcement learning structure that enables agents to learn the best actions possible in a virtual environment in order to attain their goals. That is, it unites function approximation and target optimization, mapping state-action pairs to expected rewards.

Reinforcement learning refers to goal-oriented algorithms, which learn how to attain a complex objective (goal) or how to maximize a particular dimension over many steps; for example, they can maximize the points won in a game over many moves. DRL algorithms can start from a blank slate, and under the right conditions, they achieve superhuman performance.

Here, we offer several research topics as Master’s theses, including:

1) Combining DRL with other ML approaches such as meta-learning, life-long learning, active learning, generative models, and Bayesian deep learning. 2) Systematically studying and benchmarking the value function, state and reward functions, actions, and different hyperparameters. 3) Studying and benchmarking policy evaluation techniques. 4) Investigating applications of DRL in computer vision or NLP. For details please refer to the Google doc.
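The core idea of mapping state-action pairs to expected rewards is easiest to see in the tabular special case; deep RL replaces the table below with a neural network. A minimal tabular Q-learning sketch on a toy 5-state chain, illustrative only and not one of the thesis topics themselves:

```python
import numpy as np

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning on a chain MDP: reward 1 for reaching the right
    end, 0 otherwise. The behaviour policy is uniform random, which is
    fine because Q-learning is off-policy."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))              # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:              # episode ends at the right end
            a = int(rng.integers(2))         # uniform random behaviour policy
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # TD update toward the target r + gamma * max_a' Q(s', a')
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```

The learned greedy policy moves right in every state, since only the right end yields a reward; the TD target above is exactly what DQN-style methods approximate with a network instead of a table.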

Analysis of Overfitting Effects During Model Selection

In this Master’s thesis you will analyze whether model selection/hyperparameter tuning can suffer from overfitting. For details refer to the Google doc.

Hierarchical word embeddings using hyperbolic neural networks

In this Master’s thesis you will investigate recent advancements in geometric deep learning with a specific focus on natural language processing. Geometric deep learning operates in spaces with non-zero curvature. Fields of application include computer vision (three-dimensional objects), genetics, neuroscience, or virtually any kind of graph data. Hyperbolic deep learning in particular is currently of interest to the machine learning community. These models are especially promising when there are hierarchical structures in the data. For details refer to the Google Doc.

Analyzing the Permutation Feature Importance

The permutation feature importance (PFI) assesses the importance of features by computing the drop in out-of-bag performance after permuting a considered feature. It was initially introduced for random forests, although the main idea can also be applied in a model-agnostic fashion. However, many of its properties are not well studied and several research questions are still open. The aim of this thesis is to study the model-agnostic PFI and conduct empirical studies to answer some of these open research questions.

For details refer to the Google Doc.
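The model-agnostic variant of PFI is straightforward to state in code: permute one column, re-evaluate, and record the performance drop. A minimal sketch, assuming a fitted model with a `predict` method and a higher-is-better `metric` (both interfaces are hypothetical, chosen for illustration):

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic PFI: mean performance drop after permuting each column.

    `model` is any fitted object with a .predict(X) method; `metric(y, y_hat)`
    is a score where higher is better."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature-target link
            drops.append(baseline - metric(y, model.predict(Xp)))
        importances[j] = np.mean(drops)
    return importances
```

A feature the model ignores gets an importance of exactly zero, since permuting it leaves the predictions unchanged; informative features get positive scores.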

Uncertainty Estimation in Deep Learning-based Hybrid Localization

This topic is offered in collaboration with Fraunhofer IIS, Nürnberg, and aims to develop a robot system that can (self-)localize in large-scale indoor environments. The robot platform is equipped with stereo and monocular cameras, IMUs, and a UWB transmitter. The goal is to predict an accurate pose of the robot from the sensor data with deep learning-based models and to use uncertainty quantification to estimate the uncertainty in the data and the models. The main part will be to estimate the epistemic and aleatoric uncertainty, e.g., using SWAG or Monte Carlo dropout. These uncertainties can then be used to improve the pose accuracy.

For more details refer to the Google Doc
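Of the two suggested techniques, Monte Carlo dropout is the simpler to sketch: dropout is kept active at prediction time, and the spread of the sampled predictions serves as an epistemic uncertainty estimate. A toy NumPy version for a two-layer network with hypothetical, pre-given weights:

```python
import numpy as np

def mc_dropout_predict(W1, W2, x, p=0.5, n_samples=100, seed=0):
    """Monte Carlo dropout sketch: sample dropout masks at test time and
    return the mean prediction plus its standard deviation across samples.
    W1, W2 are the weights of a toy two-layer network (assumed trained)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(0, W1 @ x)                 # ReLU hidden layer
        mask = rng.random(h.shape) > p            # sample a dropout mask
        preds.append(W2 @ (h * mask / (1 - p)))   # inverted-dropout scaling
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction and epistemic spread
```

In the thesis the network would be the trained pose regressor; the per-output standard deviation is the epistemic estimate, while aleatoric uncertainty requires an additional predicted noise term.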



The disputation of a thesis lasts about 60-90 minutes and consists of two parts. Only the first part is relevant for the grade; it takes 30 minutes (Bachelor’s thesis) or 40 minutes (Master’s thesis). Here, the student is expected to summarize the main results of the thesis in a presentation. The supervisor(s) will ask questions regarding the content of the thesis in between. In the second part (after the presentation), the supervisors will give detailed feedback and discuss the thesis with the student. This takes about 30 minutes.


You have to prepare a presentation, and if there is a bigger time gap between handing in your thesis and the disputation, you might want to reread your thesis.

That’s up to you, but you have to respect the time limit. Preparing more than 20 slides for a Bachelor’s presentation or more than 30 slides for a Master’s presentation is VERY likely a bad idea.

Bernd’s office, in front of the big TV. At least one PhD student will be present, maybe more. If you want to present in front of a larger audience in the seminar room or the old library, please book the room yourself and inform us.

We do not care, you can choose.

A document (Prüfungsprotokoll), which you get from the “Prüfungsamt” (Frau Maxa or Frau Höfner), for the disputation. Your laptop or a USB stick with the presentation. You can also email Bernd a PDF.

The student will be graded on the quality of the thesis, the presentation, and the oral discussion of the work. The grade is mainly determined by the written thesis itself, but it can improve or drop depending on the presentation and your answers to the defense questions.

The presentation should cover your thesis, including motivation, introduction, a description of new methods, and the results of your research. Please do NOT explain already existing methods in detail; put the focus on your novel work and results.

The questions will be directly connected to your thesis and related theory.

Student Research Projects


We are always interested in mentoring interesting student research projects. Please contact us directly with an interesting research idea. In the future you will also be able to find research project topics below.

Available projects

Currently we are not offering any student research projects.

For more information please visit the official web page Studentische Forschungsprojekte (Lehre@LMU)

Current Theses (With Working Titles)


Title Type
Probabilistic Deep Learning of Liver Failure in Therapeutical Cancer Treatment MA
Model-agnostic Feature Importance by Loss Measures MA
Model-agnostic interpretable machine learning methods for multivariate time series forecasting MA
Normalizing Flows for Interpretability Measures MA
Representation Learning for Semi-Supervised Genome Sequence Classification MA
Neural Architecture Search for Genomic Sequence Data MA
Comparison of Machine Learning Models for Competing Risks Survival Analysis MA
Multi-accuracy calibration for survival models MA
Multi-state modeling in the context of predictive maintenance MA

Completed Theses


Completed Theses (LMU Munich)

Title Type Completed
Model Based Quality Diversity Optimization MA 2021
mlr3automl - Automated Machine Learning in R MA 2021
Knowledge distillation - Compressing arbitrary learners into a neural net MA 2020
Personality Prediction Based on Mobile Gaze and Touch Data MA 2020
Identifying Subgroups induced by Interaction Effects MA 2020
Benchmarking: Tests and Visualizations MA 2019
Counterfactual Explanations MA 2019
Methodik, Anwendungen und Interpretation moderner Benchmark-Studien am Beispiel der Risikomodellierung bei akuter Cholangitis MA 2019
Machine Learning pipeline search with Bayesian Optimization and Reinforcement Learning MA 2019
Visualization and Efficient Replay Memory for Reinforcement Learning BA 2019
Neural Network Embeddings for Categorical Data BA 2019
Localizing phosphorylation sites by deep learning-based fragment ion intensity MA 2019
Average Marginal Effects in Machine Learning MA 2019
Wearable-based Severity Detection in the Context of Parkinson’s Disease Using Deep Learning Techniques MA 2018
Bayesian Optimization under Noise for Model Selection in Machine Learning MA 2018
Interpretable Machine Learning - An Application Study using the Munich Rent Index MA 2018
Automatic Gradient Boosting MA 2018
Efficient and Distributed Model-Based Boosting for Large Datasets MA 2018
Linear individual model-agnostic explanations - discussion and empirical analysis of modifications MA 2018
Extending Hyperband with Model-Based Sampling Strategies MA 2018
Reinforcement learning in R MA 2018
Anomaly Detection using Machine Learning Methods MA 2018
RNN Bandmatrix MA 2018
Configuration of deep neural networks using model-based optimization MA 2017
Kernelized anomaly detection MA 2017
Automatic model selection and hyperparameter optimization MA 2017
mlrMBO / RF distance based infill criteria MA 2017
Kostensensitive Entscheidungsbäume für beobachtungsabhängige Kosten BA 2016
Implementation of 3D Model Visualization for Machine Learning BA 2016
Eine Simulationsstudie zum Sampled Boosting BA 2016
Implementation and Comparison of Stacking Methods for Machine Learning MA 2016
Runtime estimation of ML models BA 2016
Process Mining: Checking Methods for Process Conformance MA 2016
Implementation of Multilabel Algorithms and their Application on Driving Data MA 2016
Stability Selection for Component-Wise Gradient Boosting in Multiple Dimensions MA 2016
Detecting Future Equipment Failures: Predictive Maintenance in Chemical Industrial Plants MA 2016
Fault Detection for Fire Alarm Systems based on Sensor Data MA 2016
Laufzeitanalyse von Klassifikationsverfahren in R BA 2015
Benchmark Analysis for Machine Learning in R BA 2015
Implementierung und Evaluation ergänzender Korrekturmethoden für statistische Lernverfahren bei unbalancierten Klassifikationsproblemen BA 2014

Completed Theses (Supervised by Bernd Bischl at TU Dortmund)

Title Type Completed
Anwendung von Multilabel-Klassifikationsverfahren auf Medizingerätestatusreporte zur Generierung von Reparaturvorschlägen MA 2015
Erweiterung der Plattform OpenML um Ereigniszeitanalysen MA 2015
Modellgestützte Algorithmenkonfiguration bei Feature-basierten Instanzen: Ein Ansatz über das Profile-Expected-Improvement Dipl. 2015
Modellbasierte Hyperparameteroptimierung für maschinelle Lernverfahren auf großen Daten MA 2015
Implementierung einer Testsuite für mehrkriterielle Optimierungsprobleme BA 2014
R-Pakete für Datenmanagement und -manipulation großer Datensätze BA 2014
Lokale Kriging-Verfahren zur Modellierung und Optimierung gemischter Parameterräume mit Abhängigkeitsstrukturen BA 2014
Kostensensitive Algorithmenselektion für stetige Black-Box-Optimierungsprobleme basierend auf explorativer Landschaftsanalyse MA 2013
Exploratory Landscape Analysis für mehrkriterielle Optimierungsprobleme MA 2013
Feature-based Algorithm Selection for the Traveling-Salesman-Problem BA 2013
Implementierung und Untersuchung einer parallelen Support Vector Machine in R Dipl. 2013
Sequential Model-Based Optimization by Ensembles: A Reinforcement Learning Based Approach Dipl. 2012
Vorhersage der Verkehrsdichte in Warschau basierend auf dem Traffic Simulation Framework BA 2011
Klassifikation von Blutgefäßen und Neuronen des menschlichen Gehirns anhand von ultramikroskopierten 3D-Bilddaten BA 2011
Uncertainty Sampling zur Auswahl optimaler Sampler aus der trunkierten Normalverteilung BA 2011
Over-/Undersampling für unbalancierte Klassifikationsprobleme im Zwei-Klassen-Fall BA 2010