Theses

The chair typically offers various thesis topics each semester in the areas of computational statistics, machine learning, data mining, optimization, and statistical software. You are welcome to suggest your own topic as well.

Before you apply for a thesis topic make sure that you fit the following profile:

Before you start writing your thesis you must look for a supervisor within the group.

Send an email to juliane.lauks [at] stat.uni-muenchen.de with the following information:

Your application will only be processed if it contains all required information.


Potential Thesis Topics


Below is a list of potential thesis topics. Before you start writing your thesis you must look for a supervisor within the group.

Available theses

Positive-unlabeled Learning on Text Data

Positive-unlabeled learning (PU learning) describes binary classification tasks in which the data contains annotated positive labels but no negative labels. This is a special case of semi-supervised learning and is related to one-class classification. The problem is common, especially in tasks with many observations, where it is impossible to annotate all samples and the focus lies on the positive labels. Within this thesis, PU learning should be applied in a text classification setting on a data set provided by the Fraunhofer IMW, one of the two Fraunhofer Institutes involved in this project.

The master's student is expected to familiarize themselves with the different PU learning approaches and apply the most promising one to an internal dataset provided by the Fraunhofer IMW.
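As a rough illustration of one classic PU learning approach, the sketch below shows the calibration step of the Elkan-Noto method: a classifier g(x) is trained to separate labeled positives from unlabeled data, the constant c = P(labeled | y = 1) is estimated on held-out positives, and g(x)/c recovers P(y = 1 | x). The scores below are made-up numbers, not output of a real classifier.

```python
# Minimal sketch of Elkan-Noto calibration for PU learning.
# Assumption: a "non-traditional" classifier g(x) = P(labeled | x) has
# already been fit; here its scores are simply given as plain numbers.

def estimate_c(scores_of_holdout_positives):
    """Estimate c = P(labeled | y = 1) as the mean score g(x)
    over a held-out set of known positives."""
    return sum(scores_of_holdout_positives) / len(scores_of_holdout_positives)

def pu_posterior(score, c):
    """Convert g(x) = P(labeled | x) into P(y = 1 | x) = g(x) / c,
    clipped to the valid probability range."""
    return min(1.0, score / c)

# Held-out positives tend to receive high scores from g.
c = estimate_c([0.8, 0.9, 0.7])   # c = 0.8
print(pu_posterior(0.4, c))       # 0.5 for an unlabeled point with g(x) = 0.4
```

The key point is that the classifier trained on "labeled vs. unlabeled" only needs a single constant to be turned into a proper positive-vs-negative probability estimate.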

Take a look at this exposé for more information on this topic.

Learning Set and Irregular Data using Deep Meta-learning

Meta-learning, also known as “learning to learn”, intends to design models that can learn new skills or adapt to new environments rapidly with only a few training examples. There are three common approaches: 1) metric-based: learn an efficient distance metric; 2) model-based: use a (recurrent) network with external or internal memory; 3) optimization-based: optimize the model parameters explicitly for fast learning.
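To make the metric-based approach concrete, here is a toy sketch in the spirit of prototypical networks: support examples of each class are averaged into a prototype, and a query is classified by its nearest prototype. The embeddings are hard-coded here; in practice they would come from a learned embedding network.

```python
# Toy sketch of metric-based few-shot classification (prototypical-network
# style). Embeddings are given directly for illustration; normally they are
# produced by a neural network trained across many few-shot tasks.

def prototype(embeddings):
    """Mean embedding of a class's support examples."""
    dim = len(embeddings[0])
    return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]

def sq_dist(a, b):
    """Squared Euclidean distance between two embeddings."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, prototypes):
    """Return the label of the nearest class prototype."""
    return min(prototypes, key=lambda label: sq_dist(query, prototypes[label]))

protos = {
    "cat": prototype([[1.0, 0.0], [0.8, 0.2]]),
    "dog": prototype([[0.0, 1.0], [0.2, 0.8]]),
}
print(classify([0.9, 0.1], protos))  # "cat"
```

Because classification reduces to a nearest-neighbour lookup in embedding space, adding a new class only requires a handful of support examples, not retraining.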

A typical machine/deep learning algorithm, such as regression or classification, is designed for fixed-dimensional data instances. Extending such algorithms to inputs or outputs that are permutation-invariant sets rather than fixed-dimensional vectors is not trivial. However, much of the available data is unstructured or set-valued, such as point clouds, video/audio clips, and time-series data. Learning representations of unstructured data is very valuable in many domains, such as health care, and has a high impact on machine and deep learning research.

Here, we offer different research topics as master's theses; for more information, please see the Google doc.

Deep Reinforcement Learning

Deep reinforcement learning (DRL) combines deep neural networks with a reinforcement learning structure that enables agents to learn the best actions possible in a virtual environment in order to attain their goals. That is, it unites function approximation and target optimization, mapping state-action pairs to expected rewards.

Reinforcement learning refers to goal-oriented algorithms that learn how to attain a complex objective (goal) or how to maximize a particular dimension over many steps; for example, they can maximize the points won in a game over many moves. DRL algorithms can start from a blank slate and, under the right conditions, achieve superhuman performance.
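The "state-action pairs to expected rewards" idea can be sketched with plain tabular Q-learning; in deep RL the table below would be replaced by a neural network. The tiny chain environment and all its parameters are made up for illustration.

```python
# Minimal tabular Q-learning sketch: Q maps state-action pairs to expected
# (discounted) rewards. The 3-state chain environment is invented here;
# action 1 moves right, and being at the last state yields reward 1.

import random

n_states, actions = 3, [0, 1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.3

def step(s, a):
    """Deterministic chain: reward 1 whenever the agent ends up at the last state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == n_states - 1 else 0.0

random.seed(0)
for _ in range(500):                          # episodes
    s = 0
    for _ in range(10):                       # steps per episode
        if random.random() < eps:             # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a_: Q[(s, a_)])
        s2, r = step(s, a)
        # Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a_)] for a_ in actions) - Q[(s, a)])
        s = s2

print(Q[(0, 1)], Q[(0, 0)])  # moving right from the start should earn the higher value
```

Deep RL replaces the dictionary Q with a network Q(s, a; w) and the tabular update with a gradient step on the same Bellman error.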

Here, we offer different research topics as master's theses, including:

1) Investigating the combination of DRL with other ML approaches such as meta-learning, life-long learning, active learning, generative models, and Bayesian deep learning.
2) Systematically studying and benchmarking the value function, state and reward functions, actions, and different hyperparameters.
3) Studying and benchmarking policy evaluation techniques.
4) Investigating the application of DRL in computer vision or NLP.

For details please refer to the Google doc.

Comparison of Machine Learning Models For Competing Risks Survival Analysis

Survival analysis is increasingly gaining attention in the Machine Learning community. Recently, there have been several contributions that put this statistically flavoured topic into the Machine Learning context. However, comparisons and benchmarks of these methods are lacking. We are looking for an M.Sc. student to benchmark state-of-the-art machine learning models for competing risks survival analysis. For details refer to the Google doc.

A Neural Network-based Approach for Feature-weighted Elastic Net

In this master's thesis you will implement the recently proposed fwelnet algorithm as a neural network. fwelnet can be defined in a neural network with either conventional layers or using cvxpylayers. In simulation studies you will compare the two implementations against each other and against an existing R package. You will further investigate the model performance when using more complex additive predictors or deep neural networks to feed information into the model parameters.
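For intuition, the sketch below shows only the penalty structure behind a feature-weighted elastic net: each coefficient gets its own penalty weight, e.g. derived from side information about the features. This is a simplified illustration, not the published fwelnet algorithm, and all numbers are invented.

```python
# Simplified feature-weighted elastic-net penalty:
# lam * sum_j w_j * (alpha * |b_j| + (1 - alpha) / 2 * b_j^2),
# where w_j is a per-feature weight (in fwelnet, learned from feature
# side information). Pure illustration; not the real fwelnet fitting code.

def fw_elastic_net_penalty(beta, w, lam=1.0, alpha=0.5):
    """Elastic-net penalty with individual weights w_j per coefficient."""
    return lam * sum(
        wj * (alpha * abs(bj) + (1 - alpha) / 2 * bj ** 2)
        for bj, wj in zip(beta, w)
    )

beta = [1.0, -2.0]
print(fw_elastic_net_penalty(beta, w=[1.0, 1.0]))  # 2.75: plain elastic net
print(fw_elastic_net_penalty(beta, w=[1.0, 4.0]))  # 8.75: feature 2 penalized harder
```

In a neural-network implementation this term is simply added to the loss, so standard gradient-based training handles it (up to the non-differentiability of the L1 part at zero).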

For details please see this Markdown file.

Deep Distributional Regression

Deep Distributional Regression (DDR) extends classical statistical regression models, such as generalized linear and additive models as well as generalized additive models for location, scale and shape, to regression models with potentially many deep neural networks. In this Bachelor's thesis you will investigate the performance of the DDR framework in a large simulation and benchmark study and thereby help to generate new ideas and suggestions for improvement in the field of DDR.
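The core idea can be sketched in a few lines: instead of predicting only a mean, the model predicts all parameters of a response distribution (here mean and scale of a normal) and is fit by minimizing the negative log-likelihood. The "network" below is reduced to a fixed toy function for illustration.

```python
# Minimal sketch of distributional regression: predict (mu, sigma) of a
# normal distribution per observation and score with the negative
# log-likelihood. The parameter heads here are made-up stand-ins for
# neural network outputs.

import math

def predict_params(x):
    """Stand-in for a network with two heads: location and scale."""
    mu = 2.0 * x                   # location head
    sigma = math.exp(0.1 * x)      # exp keeps the scale strictly positive
    return mu, sigma

def normal_nll(y, mu, sigma):
    """Negative log-likelihood of y under N(mu, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

mu, sigma = predict_params(1.0)
print(round(normal_nll(2.0, mu, sigma), 4))  # 1.0189
```

Because sigma depends on x, the model can capture heteroscedasticity, which a plain mean-regression network cannot.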

For details refer to the Google Doc.

Semi-Structured Deep Distributional Regression Penalties

The Semi-Structured Deep Distributional Regression (SDDR) framework extends classical statistical regression models, such as generalized linear and additive models as well as generalized additive models for location, scale and shape, to regression models with potentially many deep neural networks in the linear predictor of each parameter. SDDR allows for different penalties, including L1, L2, or other custom penalties such as smoothing penalties. In deep learning, various other penalties, such as L0 penalties, are available for neural networks. In this thesis you will extend the SDDR framework by implementing new penalties and investigate the performance of your implementation in simulations and benchmarks.
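As a quick reminder of how the mentioned penalty types differ, the sketch below evaluates them on the same coefficient vector: L2 shrinks smoothly, L1 induces sparsity, and L0 merely counts non-zero coefficients, which is why it is non-differentiable and needs special treatment in gradient-based training.

```python
# The three penalty types on the same coefficient vector (illustration only).

def l2(beta):
    """Sum of squared coefficients (differentiable, smooth shrinkage)."""
    return sum(b ** 2 for b in beta)

def l1(beta):
    """Sum of absolute coefficients (sparsity-inducing, kinked at 0)."""
    return sum(abs(b) for b in beta)

def l0(beta, tol=1e-8):
    """Number of non-zero coefficients (non-differentiable counting norm)."""
    return sum(1 for b in beta if abs(b) > tol)

beta = [0.0, -1.5, 2.0]
print(l2(beta), l1(beta), l0(beta))  # 6.25 3.5 2
```

In practice, L0 is usually handled via relaxations or stochastic gates rather than the raw count shown here.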

For details refer to the Google Doc.

Analysis of Overfitting Effects During Model Selection

In this master's thesis you will analyze whether model selection/hyperparameter tuning can suffer from overfitting. For details refer to the Google doc.
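A tiny simulation illustrates the effect in question: on pure-noise data, the best of many random "models" scores well above chance on the selection set even though nothing real was learned, so the score used to pick a configuration is an optimistic estimate of its true performance. All numbers below are invented for this demonstration.

```python
# Overfitting through selection: pick the best of 200 coin-flip "models"
# on 100 coin-flip labels. The maximum accuracy is far above the true
# 0.5 performance of every single model.

import random

random.seed(1)
n, n_models = 100, 200
labels = [random.randint(0, 1) for _ in range(n)]        # pure noise

def accuracy(preds):
    return sum(p == y for p, y in zip(preds, labels)) / n

best = max(
    accuracy([random.randint(0, 1) for _ in range(n)])   # a random "model"
    for _ in range(n_models)
)
print(best)  # clearly above the 0.5 chance level, despite pure noise
```

This is exactly why nested resampling (an outer loop untouched by the selection) is needed to estimate the performance of a tuned pipeline honestly.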

Hierarchical word embeddings using hyperbolic neural networks

In this master thesis you will investigate recent advancements in Geometric Deep Learning with a specific focus on Natural Language Processing. Geometric deep learning operates in spaces with non-zero curvatures. Fields of application include computer vision (three-dimensional objects), genetics, neuroscience or virtually any kind of graph data. Especially hyperbolic deep learning is currently of interest to the machine learning community. These models are particularly promising when there are hierarchical structures in the data. For details refer to the Google Doc.
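To give a flavour of the hyperbolic geometry involved, the sketch below computes the Poincaré-ball distance that models such as Nickel and Kiela's Poincaré embeddings build on: distances blow up near the boundary of the unit ball, which is what lets tree-like hierarchies embed with low distortion.

```python
# Poincare-ball distance:
# d(u, v) = arcosh(1 + 2 * |u - v|^2 / ((1 - |u|^2) * (1 - |v|^2)))
# Points must lie strictly inside the unit ball.

import math

def poincare_dist(u, v):
    sq = lambda x: sum(xi ** 2 for xi in x)
    diff = sq([a - b for a, b in zip(u, v)])
    return math.acosh(1 + 2 * diff / ((1 - sq(u)) * (1 - sq(v))))

# Same Euclidean gap of 0.1, but very different hyperbolic distances:
print(poincare_dist([0.0, 0.0], [0.1, 0.0]))    # near the origin: small
print(poincare_dist([0.85, 0.0], [0.95, 0.0]))  # near the boundary: much larger
```

Intuitively, the exponentially growing room near the boundary mirrors the exponential growth of nodes in a tree, which Euclidean space cannot match.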

R package for Counterfactual Explanation Methods

Machine learning models have demonstrated great success in many prediction tasks, at the cost of being less interpretable. One useful approach to explaining single predictions of a model is the counterfactual explanation, or counterfactual for short. In this master's thesis you will familiarize yourself with the literature on counterfactual explanation methods and integrate some of these methods into the counterfactuals R package. Furthermore, you will evaluate the performance of the integrated methods and their runtimes.
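As a naive illustration of what a counterfactual explanation is, the sketch below finds the smallest change to one feature that flips a toy model's prediction. Real methods search far more cleverly, over all features and with proximity and plausibility constraints; the threshold model and feature names here are made up.

```python
# Naive one-feature counterfactual search against an invented toy model.

def model(income, age):
    """Toy credit model: approve iff income is at least 50 (age is ignored)."""
    return "approve" if income >= 50 else "reject"

def counterfactual_income(income, age, step=1.0, max_steps=1000):
    """Increase income until the prediction flips; return the flipped value."""
    x = income
    for _ in range(max_steps):
        if model(x, age) == "approve":
            return x
        x += step
    return None          # no counterfactual found within the budget

print(model(42, 30))                  # "reject"
print(counterfactual_income(42, 30))  # 50.0: "an income of 50 would flip it"
```

The resulting statement ("had your income been 50, the loan would have been approved") is exactly the kind of explanation counterfactual methods aim to produce, just obtained here by brute force.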

For details refer to the Google Doc.


Disputation

Procedure

The disputation of a thesis lasts about 60-90 minutes and consists of two parts. Only the first part is relevant for the grade; it takes 30 minutes (Bachelor's thesis) or 40 minutes (Master's thesis). Here, the student is expected to summarize the main results of the thesis in a presentation. The supervisor(s) will ask questions regarding the content of the thesis in between. In the second part (after the presentation), the supervisors will give detailed feedback and discuss the thesis with the student. This will take about 30 minutes.

FAQ

You have to prepare a presentation and if there is a bigger time gap between handing in your thesis and the disputation you might want to reread your thesis.

That’s up to you, but you have to respect the time limit. Preparing more than 20 slides for a Bachelor's presentation or more than 30 slides for a Master's is VERY likely a bad idea.

Bernd’s office, in front of the big TV. At least one PhD will be present, maybe more. If you want to present in front of a larger audience in the seminar room or the old library, please book the room yourself and inform us.

We do not care, you can choose.

A document (Prüfungsprotokoll), which you get from the Prüfungsamt (Frau Maxa or Frau Höfner), for the disputation. Your laptop or a USB stick with the presentation. You can also email Bernd a PDF.

The student will be graded regarding the quality of the thesis, the presentation and the oral discussion of the work. The grade is mainly determined by the written thesis itself, but the grade can improve or drop depending on the presentation and your answers to defense questions.

The presentation should cover your thesis, including motivation, introduction, description of new methods and results of your research. Please do NOT explain already existing methods in detail here, put more focus on novel work and the results.

The questions will be directly connected to your thesis and related theory.


Student Research Projects


We are always interested in mentoring interesting student research projects. Please contact us directly with an interesting research idea. In the future you will also be able to find research project topics below.

Available projects

Efficient Job Scheduling For High Performance Computing *

Large, possibly embarrassingly parallel jobs, for example evaluations of ML algorithms on different datasets/resampling splits, often incur large scheduling overheads. In order to reduce the load on cluster workload managers and to improve flexibility, a new scheduling manager is to be created and improved which allows for asynchronous scheduling on multiple clusters.

Improved Exploration and Experiment Selection *

Knowledge about machine learning algorithms and their behaviour for different hyperparameters can be gathered from a large number of evaluations of random algorithms and configurations on random datasets. As hyperparameter spaces differ widely from algorithm to algorithm, better selection can improve the exploration aspect of this approach. Methods from the Design and Analysis of Computer Experiments (DACE) offer possibilities for more efficient exploration of those spaces. One approach could be to use models to schedule experiments that optimize for exploration of the relevant hyperparameter space.

Interactive Visualization of HPC Experiment Results *

Deeper insight into qualitative relationships in large benchmark results can be obtained from interactive visualizations and modeling approaches. Within this project, different visualization and modeling techniques are to be explored and evaluated. Interactive visualization methods that allow for zooming out and drilling down need to be evaluated.

For more information please visit the official web page Studentische Forschungsprojekte (Lehre@LMU)


Current Theses (With Working Titles)


Student Title Type
J. Bodensteiner Model Compression MA
T. Dassen Personality Prediction Based on Mobile Gaze and Touch Data MA

Completed Theses


Completed Theses (LMU Munich)

Student Title Type Completed
R. Groh Benchmarking: Tests and Vizualisations MA 2019
S. Dandl Counterfactual Explanations MA 2019
T. Stüber Methodik, Anwendungen und Interpretation moderner Benchmark-Studien am Beispiel der Risikomodellierung bei akuter Cholangitis MA 2019
J. Lin Machine Learning pipeline search with Bayesian Optimization and Reinforcement Learning MA 2019
S. Gruber Visualization and Efficient Replay Memory for Reinforcement Learning BA 2019
A. Bukreeva Neural Network Embeddings for Categorical Data BA 2019
M. Graber Localizing phosphorylation sites by deep learning-based fragment ion intensity MA 2019
B. Burger Average Marginal Effects in Machine Learning MA 2019
J. Goschenhofer   MA 2018
J. Moosbauer Bayesian Optimization under Noise for Model Selection in Machine Learning MA 2018
J. Fried Interpretable Machine Learning - An Application Study using the Munich Rent Index MA 2018
S. Coors Automatic Gradient Boosting MA 2018
D. Schalk Efficient and Distributed Model-Based Boosting for Large Datasets MA 2018
K. Engelhardt Linear individual model-agnostic explanations - discussion and empirical analysis of modifications MA 2018
N. Klein Extending Hyperband with Model-Based Sampling Strategies MA 2018
M. Dumke Reinforcement learning in R MA 2018
M. Lee Anomaly Detection using Machine Learning Methods MA 2018
J. Langer RNN Bandmatrix MA 2018
B. Klepper Configuration of deep neural networks using model-based optimization MA 2017
F. Pfisterer Kernelized anomaly detection MA 2017
M. Binder Automatic model selection and hyperparameter optimization MA 2017
V. Mayer mlrMBO / RF distance based infill criteria MA 2017
L. Haller Kostensensitive Entscheidungsbäume für beobachtungsabhängige Kosten BA 2016
B. Zhang Implementation of 3D Model Visualization for Machine Learning BA 2016
T. Riebe Eine Simulationsstudie zum Sampled Boosting BA 2016
P. Rösch Implementation and Comparison of Stacking Methods for Machine Learning MA 2016
M. Erdmann Runtime estimation of ML models BA 2016
A. Exterkate Process Mining: Checking Methods for Process Conformance MA 2016
J.-Q. Au Implementation of Multilabel Algorithms and their Application on Driving Data MA 2016
  (J.-Q. Au was a master's student at TU Dortmund)
J. Thomas Stability Selection for Component-Wise Gradient Boosting in Multiple Dimensions MA 2016
A. Franz Detecting Future Equipment Failures: Predictive Maintenance in Chemical Industrial Plants MA 2016
T. Kühn Fault Detection for Fire Alarm Systems based on Sensor Data MA 2016
B. Schober Laufzeitanalyse von Klassifikationsverfahren in R BA 2015
F. Pfisterer Benchmark Analysis for Machine Learning in R BA 2015
T. Kühn Implementierung und Evaluation ergänzender Korrekturmethoden für statistische Lernverfahren bei unbalancierten Klassifikationsproblemen BA 2014

Completed Theses (Supervised by Bernd Bischl at TU Dortmund)

Student Title Type Completed
P. Probst Anwendung von Multilabel-Klassifikationsverfahren auf Medizingerätestatusreporte zur Generierung von Reparaturvorschlägen MA 2015
D. Kirchhoff Erweiterung der Plattform OpenML um Ereigniszeitanalysen MA 2015
J. Bossek Modellgestützte Algorithmenkonfiguration bei Feature-basierten Instanzen: Ein Ansatz über das Profile-Expected-Improvement Dipl. 2015
J. Richter Modellbasierte Hyperparameteroptimierung für maschinelle Lernverfahren auf großen Daten MA 2015
B. Elkemann Implementierung einer Testsuite für mehrkriterielle Optimierungsprobleme BA 2014
M. Dagge R-Pakete für Datenmanagement und -manipulation großer Datensätze BA 2014
K. U. Schorck Lokale Kriging-Verfahren zur Modellierung und Optimierung gemischter Parameterräume mit Abhängigkeitsstrukturen BA 2014
P. Kerschke Kostensensitive Algorithmenselektion für stetige Black-Box-Optimierungsprobleme basierend auf explorativer Landschaftsanalyse MA 2013
D. Horn Exploratory Landscape Analysis für mehrkriterielle Optimierungsprobleme MA 2013
J. Bossek Feature-based Algorithm Selection for the Traveling-Salesman-Problem BA 2013
O. Meyer Implementierung und Untersuchung einer parallelen Support Vector Machine in R Dipl. 2013
S. Hess Sequential Model-Based Optimization by Ensembles: A Reinforcement Learning Based Approach Dipl. 2012
P. Kerschke Vorhersage der Verkehrsdichte in Warschau basierend auf dem Traffic Simulation Framework BA 2011
L. Schlieker Klassifikation von Blutgefäßen und Neuronen des menschlichen Gehirns anhand von ultramikroskopierten 3D-Bilddaten BA 2011
H. Riedel Uncertainty Sampling zur Auswahl optimaler Sampler aus der trunkierten Normalverteilung BA 2011
S. Meinke Over-/Undersampling für unbalancierte Klassifikationsprobleme im Zwei-Klassen-Fall BA 2010