Theses

The chair typically offers various thesis topics each semester in the areas of computational statistics, machine learning, data mining, optimization, and statistical software. You are also welcome to suggest your own topic.

Before you apply for a thesis topic, make sure that you fit the following profile:

Before you start writing your thesis you must look for a supervisor within the group.

Send an email to the contact person listed in the potential thesis topics files with the following information:

Your application will only be processed if it contains all required information.


Potential Thesis Topics

[Potential Thesis Topics] [Student Research Projects] [Current Theses] [Completed Theses]

Below is a list of potential thesis topics. Before you start writing your thesis you must look for a supervisor within the group.

Available theses

Deep Efficient Transformers for Learning Representation of Genomic Sequences

Deep learning (DL) algorithms have shown promising performance in biological research and have been successfully used to tackle computational biology problems [1, 2]. Genomics is one of the biggest contributors of large amounts of data in this field because of advancements in sequencing technologies [3]. DL is especially well suited to learning tasks within genomics, as the data – a string of nucleotides represented by the letters A, C, T, G – has a structure and grammar that is different from but analogous to the ones found in Natural Language Processing (NLP) [5]. Deep transformers are a standard model in NLP. However, transformers have not yet seen such widespread adoption in bioinformatics, and learning representations of genome data with them remains an essential and challenging endeavor. The main reason is that adapting transformers to genomics tasks is difficult because of the computational requirements of self-attention. Recently, the Enformer architecture [4] introduced by DeepMind showed how transformers can capture long-range dependencies in genome sequences of lengths up to 196kb.

Inspired by the Enformer, we would like to develop a simpler form of its architecture based on efficient transformers, with the hope of learning models that are more accessible for further research, as standard transformers are expensive to train. With the added efficiency, the model could potentially also be trained on even longer sequences. In addition, since relative positional encoding (RPE) contributes substantially to the performance of transformers [6, 7], we would like to conduct experiments with recently developed RPE methods for efficient transformers [8, 9] and see what improvements we observe.

For more details please see this document


Self-supervised Learning Method for Imbalanced Positive Unlabeled Data

Learning from positive and unlabeled data, or PU learning, is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples [3]. SimCLR is a framework for contrastive learning of visual representations. It learns representations by maximizing agreement between differently augmented views of the same data example via a contrastive loss in the latent space [2].
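To make the contrastive objective concrete, here is a minimal NumPy sketch of the NT-Xent loss used by SimCLR. This is illustrative only and not part of the topic materials; it assumes L2-normalized embeddings in which rows 2k and 2k+1 are two augmented views of the same example.

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent loss for z of shape (2N, d) with L2-normalized rows;
    rows 2k and 2k+1 are two augmented views of the same example."""
    n = z.shape[0]
    sim = z @ z.T / tau                        # temperature-scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)             # exclude self-similarity
    pos = np.arange(n) ^ 1                     # partner index: 0<->1, 2<->3, ...
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()

# Identical views should yield a lower loss than unrelated embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)
z_paired = np.repeat(a, 2, axis=0)             # perfect positive pairs
z_random = rng.normal(size=(8, 8))
z_random /= np.linalg.norm(z_random, axis=1, keepdims=True)
print(round(float(nt_xent(z_paired)), 2), round(float(nt_xent(z_random)), 2))
```

The loss rewards each embedding for being closer to its positive partner than to all other embeddings in the batch.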

Here, we offer a master's thesis topic addressing the problem of positive-unlabeled learning with a self-supervised/unsupervised learning procedure. Following the PU survey paper [3], the key research questions about PU learning can be formulated rather straightforwardly as:

  1. How can we formalize the problem of learning from PU data?
  2. What assumptions are typically made about PU data in order to facilitate the design of learning algorithms?
  3. Can we estimate the class prior from PU data and why is this useful?
  4. How can we learn a model from PU data?
  5. How can we evaluate models in a PU setting?
  6. When and why does PU data arise in real-world applications?
  7. How does PU learning relate to other areas of machine learning?

For more details please see this document


Deep Self-Supervised Divergence Learning for Multi-Modal Representation Learning

In this master's thesis, we plan to learn representations from multimodal datasets in a self-supervised manner without resorting to any specific manual annotation. Our goals are twofold: First, we aim to learn representations using a contrastive Bregman divergence loss on top of embeddings of text and audio data. Second, we learn a deep f-divergence by minimizing the divergence between multi-modal samples from the same distribution and maximizing the divergence for multi-modal samples from different classes and distributions.

For more details please see this document


Uncertainty-Aware Self-Supervised Transformer Model for Medical Image Analysis

Self-supervised transformers have recently achieved state-of-the-art results in a wide range of tasks in NLP, computer vision, and medical image analysis. Estimating the uncertainty of self-supervised model predictions is critical for building trustworthy machine learning systems in healthcare applications and medical diagnosis. In this master's thesis, we aim to develop a novel Bayesian approach to quantify the uncertainty of predictions made by vision transformers for medical image analysis.

For more details please see this document


Gaussian Process Regression and Bayesian Deep Learning for Insurance Tariff Migration

Many life insurers see the modernization of their IT systems as the most important task in order to cope with challenging economic conditions, growing customer requirements, and the establishment of new digital business models. Modernization often also requires the semi-manual migration of large policy portfolios with complex actuarial functions. These actuarial functionalities are to be transferred automatically with the help of machine learning and connected to a modern policy administration system (PAS). The topics of Interpretable Machine Learning (IML) and Automated Machine Learning (AutoML) play an important role here.

This master's thesis can be combined with a working student position at msg life.

For more details please see this document


Nutrition and time-to-event outcomes in critically ill patients: Multi-state modeling with cumulative effects

Critically ill patients often require mechanical ventilation and artificial nutrition. Different guidelines suggest different amounts of calorie and protein intake, and clinical and observational studies that attempted to estimate the association between nutritional intake and time-to-event outcomes received criticism and showed contradictory results. One difficulty in the analysis of such data is the time-dependent nature of nutritional intake. Previous studies developed methods to estimate time-varying, cumulative effects of nutrition. In this Master-level thesis, you are asked to extend the current modeling approach in order to model different stages of the hospital and ICU stay more realistically, and to extend existing software to support these extensions.

For more details please see this Google document


Empirical Evaluation of Methods for Discrete Time-to-event Analysis

Time-to-event analysis (or survival analysis) is a subfield of statistics concerned with outcomes that measure the time until the occurrence of an event. Modeling such outcomes requires specialized techniques in order to take censoring and/or truncation into account. However, it is also possible to cast survival tasks as classification or binary regression tasks by discretizing the follow-up into intervals and representing survival by a binary 0/1 variable (“no event” vs. “event”) in each of the intervals. Such models are referred to as discrete time-to-event models and can be particularly useful when the time-to-event information is discrete or has been discretized. The objective of this Bachelor-level thesis is to quantitatively evaluate the performance of methods for discrete time-to-event analysis and compare them to continuous time-to-event models.
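As a small illustration of the discretization step described above (a sketch with made-up cut points, not part of the thesis materials): each subject is expanded into one row per interval in which they are at risk, with the binary target equal to 1 only in the interval where an uncensored event occurs.

```python
# Sketch: converting continuous survival data into the person-period
# format used by discrete time-to-event models.

def person_period(time, event, cut_points):
    """Expand one subject (time, event indicator) into (interval, target) rows.

    cut_points: right boundaries of the intervals, e.g. [30, 60, 90].
    """
    rows = []
    for k, right in enumerate(cut_points):
        left = cut_points[k - 1] if k > 0 else 0
        if time <= left:
            break  # no longer at risk in this interval
        # the target is 1 only if the (uncensored) event falls in this interval
        target = int(event == 1 and left < time <= right)
        rows.append((k, target))
        if time <= right:
            break  # subject leaves the risk set after this interval
    return rows

# Uncensored event at t=75: at risk in three intervals, event in the third.
print(person_period(75, 1, [30, 60, 90]))   # [(0, 0), (1, 0), (2, 1)]
# Censored at t=45: contributes two at-risk intervals, never a 1.
print(person_period(45, 0, [30, 60, 90]))   # [(0, 0), (1, 0)]
```

The expanded rows can then be fed to any binary classifier, with the interval index as an additional feature.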

For more details please see this Google document


Interpretation of Black Box Models with Advanced Surrogate Models

Due to strict regulatory requirements regarding model interpretability, generalized linear models (GLMs) are often deployed in insurance and other fields. However, compared to these inherently interpretable models, machine learning (ML) models have been claimed to perform better in prediction tasks. Unfortunately, ML models usually do not provide a closed-form formula for their predictions like GLMs do. This lack of interpretability makes it difficult to implement and deploy ML models. The aim of this thesis is to develop flexible surrogate models that are based on tree partitioning and can generate comprehensible closed-form formulas within subgroups of observations (e.g., by fitting GLMs or symbolic regression models in the leaf nodes).

For more details please see this Google document


Laplace Approximation for Uncertainty Quantification in Positive Unlabeled Learning

Reliable uncertainty quantification methods in positive unlabeled (PU) learning are sorely needed. The Laplace approximation is a simple yet powerful approximation to the posterior of a neural network that is easy and quick to compute. Unfortunately, it requires both positive and negative labels, and therefore cannot be applied directly to models trained on PU data. Hence, the topic of this Master's thesis is to find a way to compute the Laplace approximation in PU learning, including its implementation and an experimental comparison with other uncertainty quantification methods.
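For intuition, here is a minimal NumPy sketch of the Laplace approximation for a fully labeled logistic regression (illustrative only; the thesis asks how to obtain something like this without negative labels). The posterior is approximated by a Gaussian centered at the MAP weights, with covariance equal to the inverse Hessian of the negative log posterior. Note that the Hessian term requires the 0/1 labels, which is exactly the ingredient missing in the PU setting.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_logreg(X, y, prior_prec=1.0, lr=0.1, steps=500):
    """Laplace approximation for logistic regression with a Gaussian prior."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):                       # plain gradient descent to the MAP
        g = X.T @ (sigmoid(X @ w) - y) + prior_prec * w
        w -= lr * g / n
    p = sigmoid(X @ w)
    # Hessian of the negative log posterior at the MAP
    H = X.T @ np.diag(p * (1 - p)) @ X + prior_prec * np.eye(d)
    return w, np.linalg.inv(H)                   # posterior ~ N(w_map, H^{-1})

# Toy data: the label depends (almost) only on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)
w_map, cov = laplace_logreg(X, y)
print(w_map.shape, cov.shape)  # (2,) (2, 2)
```

The covariance matrix can then be used, e.g., to sample weights or to compute predictive uncertainties for new inputs.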

For more details please see this Google document


Knowledge distillation for proteasomal cleavage prediction in a vaccine design framework

The objective of this thesis is to apply techniques from knowledge distillation to derive a student model from an available cleavage predictor (a binary classifier on sequence data). The student model should be suitable for embedding in a vaccine design framework based on mixed-integer linear programming, and should perform better than the linear predictor currently employed.

For more details please see this Google document


Learning Set and Irregular Data using Deep Meta-learning

Meta-learning, also known as “learning to learn”, aims to design models that can learn new skills or adapt to new environments rapidly from a few training examples. There are three common approaches: 1) metric-based: learn an efficient distance metric; 2) model-based: use a (recurrent) network with external or internal memory; 3) optimization-based: optimize the model parameters explicitly for fast learning.

A typical machine/deep learning algorithm, like regression or classification, is designed for fixed-dimensional data instances. Extending these algorithms to handle inputs or outputs that are permutation-invariant sets rather than fixed-dimensional vectors is not trivial. However, much of the available data is unstructured or set-valued, such as point clouds, video/audio clips, and time-series data. Learning representations of unstructured data is very valuable in many domains, such as health care, and has a high impact on machine learning and deep learning research.

Here, we offer different research topics as master's theses; for more information, please see the Google doc.


Deep Reinforcement Learning

Deep reinforcement learning (DRL) combines deep neural networks with a reinforcement learning framework that enables agents to learn the best possible actions in a virtual environment in order to attain their goals. That is, it unites function approximation and target optimization, mapping state-action pairs to expected rewards.

Reinforcement learning refers to goal-oriented algorithms, which learn how to attain a complex objective (goal) or how to maximize a particular dimension over many steps; for example, they can maximize the points won in a game over many moves. DRL algorithms can start from a blank slate, and under the right conditions, they achieve superhuman performance.

Here, we offer different research topics as master's theses, including:

1) Investigating DRL in combination with other ML approaches such as meta-learning, life-long learning, active learning, generative models, and Bayesian deep learning.
2) Systematically studying and benchmarking the value function, state and reward functions, actions, and different hyperparameters.
3) Studying and benchmarking policy evaluation techniques.
4) Investigating the application of DRL in computer vision or NLP.

For details please refer to the Google doc.
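The “mapping state-action pairs to expected rewards” described above can be made concrete with tabular Q-learning on a toy environment (an illustrative sketch, not part of the listed topics; the corridor environment is made up for this example). DRL replaces the Q-table below with a deep neural network.

```python
import random

# Tabular Q-learning on a 5-state corridor: the agent starts at state 0
# and receives a reward of 1 for reaching state 4.

N_STATES = 5
ACTIONS = (1, -1)  # step right / left (right listed first so greedy ties break right)

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r, done = step(s, a)
            # one-step temporal-difference update toward r + gamma * max_b Q(s2, b)
            target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
# Learned values of stepping right decay geometrically with distance from the goal.
print([round(Q[(s, 1)], 2) for s in range(N_STATES - 1)])
```

After training, the greedy policy with respect to Q walks straight to the goal state.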


Analysis of Overfitting Effects During Model Selection

In this master's thesis you will analyze whether model selection/hyperparameter tuning can suffer from overfitting. For details refer to the Google doc.


Hierarchical word embeddings using hyperbolic neural networks

In this master's thesis you will investigate recent advancements in Geometric Deep Learning, with a specific focus on Natural Language Processing. Geometric deep learning operates in spaces with non-zero curvature. Fields of application include computer vision (three-dimensional objects), genetics, neuroscience, or virtually any kind of graph data. Hyperbolic deep learning in particular is currently of interest to the machine learning community. These models are especially promising when there are hierarchical structures in the data. For details refer to the Google Doc.


Analyzing the Permutation Feature Importance

The permutation feature importance (PFI) assesses the importance of features by computing the drop in out-of-bag performance after permuting a considered feature. It was initially introduced for random forests, although the main idea can also be applied in a model-agnostic fashion. However, many of its properties are not well studied and several research questions are still open. The aim of this thesis is to study the model-agnostic PFI and conduct empirical studies to answer some of these open research questions.
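The model-agnostic variant of the procedure can be sketched in a few lines (an illustrative sketch on held-out data with a toy “model”, not the thesis code): the importance of feature j is the increase in loss after the values of column j are randomly permuted, which breaks its association with the target while keeping its marginal distribution intact.

```python
import numpy as np

def pfi(predict, X, y, loss, n_repeats=10, seed=0):
    """Model-agnostic permutation feature importance on held-out data."""
    rng = np.random.default_rng(seed)
    base = loss(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break the feature-target link
            drops.append(loss(y, predict(Xp)) - base)
        importances[j] = np.mean(drops)            # average over permutations
    return importances

# Toy check: y depends on feature 0 only, so only its PFI should be large.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0]
model = lambda X: 3.0 * X[:, 0]                    # a "perfectly fitted" model
mse = lambda y, p: np.mean((y - p) ** 2)
imp = pfi(model, X, y, mse)
print(imp.round(2))  # only feature 0 gets a large importance; the others are exactly 0
```

Averaging over several permutations (`n_repeats`) reduces the variance introduced by the random shuffling.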

For details refer to the Google Doc.


Uncertainty Estimation in Deep Learning-based Hybrid Localization

This topic is offered in collaboration with Fraunhofer IIS, Nürnberg, and aims to develop a robot system that can (self-)localize in large-scale indoor environments. The robot platform is equipped with stereo and monocular cameras, IMUs, and a UWB transmitter. The goal is to predict an accurate pose of the robot from the sensors with deep learning-based models and to use uncertainty quantification to estimate the uncertainty in the data and the models. The main part will be to estimate the epistemic and aleatoric uncertainty, e.g., using SWAG or Monte Carlo dropout. These uncertainties can then be used to improve the pose accuracy.

For more details refer to the Google Doc


Wildlife Image Classification - Detection of Outliers

Please see the Proposal


Disputation

Procedure

The disputation of a thesis lasts about 60-90 minutes and consists of two parts. Only the first part is relevant for the grade; it takes 30 minutes (Bachelor's thesis) or 40 minutes (Master's thesis). Here, the student is expected to summarize the main results of the thesis in a presentation. The supervisor(s) will ask questions regarding the content of the thesis in between. In the second part (after the presentation), the supervisors will give detailed feedback and discuss the thesis with the student. This takes about 30 minutes.

FAQ

You have to prepare a presentation, and if there is a bigger time gap between handing in your thesis and the disputation, you might want to reread your thesis.

That’s up to you, but you have to respect the time limit. Preparing more than 20 slides for a Bachelor’s presentation or more than 30 slides for a Master’s presentation is VERY likely a bad idea.

Bernd’s office, in front of the big TV. At least one PhD student will be present, maybe more. If you want to present in front of a larger audience in the seminar room or the old library, please book the room yourself and inform us.

We do not care, you can choose.

A document (Prüfungsprotokoll), which you get from the “Prüfungsamt” (Frau Maxa or Frau Höfner), for the disputation. Your laptop or a USB stick with the presentation. You can also email Bernd a PDF.

The student will be graded on the quality of the thesis, the presentation, and the oral discussion of the work. The grade is mainly determined by the written thesis itself, but it can improve or drop depending on the presentation and your answers to the defense questions.

The presentation should cover your thesis, including motivation, introduction, a description of the new methods, and the results of your research. Please do NOT explain already existing methods in detail; put more focus on your novel work and the results.

The questions will be directly connected to your thesis and related theory.


Student Research Projects


We are always interested in mentoring interesting student research projects. Please contact us directly with an interesting research idea. In the future, you will also be able to find research project topics below.

Available projects

Currently we are not offering any student research projects.

For more information please visit the official web page Studentische Forschungsprojekte (Lehre@LMU)


Current Theses (With Working Titles)


Title Type
Data-driven Lag-lead Selection for Exposure-Lag-Response Associations BA
Probabilistic Deep Learning of Liver Failure in Therapeutical Cancer Treatment MA
Model agnostic Feature Importance by Loss Measures MA
Model-agnostic interpretable machine learning methods for multivariate time series forecasting MA
Normalizing Flows for Interpretability Measures MA
Representation Learning for Semi-Supervised Genome Sequence Classification MA
Neural Architecture Search for Genomic Sequence Data MA
Comparison of Machine Learning Models For Competing Risks Survival Analysis MA
Multi-accuracy calibration for survival models MA

Completed Theses


Completed Theses (LMU Munich)

Title Type Completed
Multi-state modeling in the context of predictive maintenance MA 2021
Model Based Quality Diversity Optimization MA 2021
mlr3automl - Automated Machine Learning in R MA 2021
Knowledge distillation - Compressing arbitrary learners into a neural net MA 2020
Personality Prediction Based on Mobile Gaze and Touch Data MA 2020
Identifying Subgroups induced by Interaction Effects MA 2020
Benchmarking: Tests and Visualizations MA 2019
Counterfactual Explanations MA 2019
Methodik, Anwendungen und Interpretation moderner Benchmark-Studien am Beispiel der Risikomodellierung bei akuter Cholangitis MA 2019
Machine Learning pipeline search with Bayesian Optimization and Reinforcement Learning MA 2019
Visualization and Efficient Replay Memory for Reinforcement Learning BA 2019
Neural Network Embeddings for Categorical Data BA 2019
Localizing phosphorylation sites by deep learning-based fragment ion intensity MA 2019
Average Marginal Effects in Machine Learning MA 2019
Wearable-based Severity Detection in the Context of Parkinson’s Disease Using Deep Learning Techniques MA 2018
Bayesian Optimization under Noise for Model Selection in Machine Learning MA 2018
Interpretable Machine Learning - An Application Study using the Munich Rent Index MA 2018
Automatic Gradient Boosting MA 2018
Efficient and Distributed Model-Based Boosting for Large Datasets MA 2018
Linear individual model-agnostic explanations - discussion and empirical analysis of modifications MA 2018
Extending Hyperband with Model-Based Sampling Strategies MA 2018
Reinforcement learning in R MA 2018
Anomaly Detection using Machine Learning Methods MA 2018
RNN Bandmatrix MA 2018
Configuration of deep neural networks using model-based optimization MA 2017
Kernelized anomaly detection MA 2017
Automatic model selection and hyperparameter optimization MA 2017
mlrMBO / RF distance based infill criteria MA 2017
Kostensensitive Entscheidungsbäume für beobachtungsabhängige Kosten BA 2016
Implementation of 3D Model Visualization for Machine Learning BA 2016
Eine Simulationsstudie zum Sampled Boosting BA 2016
Implementation and Comparison of Stacking Methods for Machine Learning MA 2016
Runtime estimation of ML models BA 2016
Process Mining: Checking Methods for Process Conformance MA 2016
Implementation of Multilabel Algorithms and their Application on Driving Data MA 2016
Stability Selection for Component-Wise Gradient Boosting in Multiple Dimensions MA 2016
Detecting Future Equipment Failures: Predictive Maintenance in Chemical Industrial Plants MA 2016
Fault Detection for Fire Alarm Systems based on Sensor Data MA 2016
Laufzeitanalyse von Klassifikationsverfahren in R BA 2015
Benchmark Analysis for Machine Learning in R BA 2015
Implementierung und Evaluation ergänzender Korrekturmethoden für statistische Lernverfahren bei unbalancierten Klassifikationsproblemen BA 2014

Completed Theses (Supervised by Bernd Bischl at TU Dortmund)

Title Type Completed
Anwendung von Multilabel-Klassifikationsverfahren auf Medizingerätestatusreporte zur Generierung von Reparaturvorschlägen MA 2015
Erweiterung der Plattform OpenML um Ereigniszeitanalysen MA 2015
Modellgestützte Algorithmenkonfiguration bei Feature-basierten Instanzen: Ein Ansatz über das Profile-Expected-Improvement Dipl. 2015
Modellbasierte Hyperparameteroptimierung für maschinelle Lernverfahren auf großen Daten MA 2015
Implementierung einer Testsuite für mehrkriterielle Optimierungsprobleme BA 2014
R-Pakete für Datenmanagement und -manipulation großer Datensätze BA 2014
Lokale Kriging-Verfahren zur Modellierung und Optimierung gemischter Parameterräume mit Abhängigkeitsstrukturen BA 2014
Kostensensitive Algorithmenselektion für stetige Black-Box-Optimierungsprobleme basierend auf explorativer Landschaftsanalyse MA 2013
Exploratory Landscape Analysis für mehrkriterielle Optimierungsprobleme MA 2013
Feature-based Algorithm Selection for the Traveling-Salesman-Problem BA 2013
Implementierung und Untersuchung einer parallelen Support Vector Machine in R Dipl. 2013
Sequential Model-Based Optimization by Ensembles: A Reinforcement Learning Based Approach Dipl. 2012
Vorhersage der Verkehrsdichte in Warschau basierend auf dem Traffic Simulation Framework BA 2011
Klassifikation von Blutgefäßen und Neuronen des menschlichen Gehirns anhand von ultramikroskopierten 3D-Bilddaten BA 2011
Uncertainty Sampling zur Auswahl optimaler Sampler aus der trunkierten Normalverteilung BA 2011
Over-/Undersampling für unbalancierte Klassifikationsprobleme im Zwei-Klassen-Fall BA 2010