Machine Learning and Security
Website of the Chair of Machine Learning and Security
Photo: View from our building over Berlin.

Welcome and Overview

Our research group conducts fundamental research at the intersection of computer security and machine learning. On the one hand, we are interested in developing intelligent systems that can learn to protect computers from attacks and identify security problems automatically. On the other hand, we explore the security and privacy of machine learning by developing novel attacks and defenses.

We are part of the new Berlin Institute for the Foundations of Learning and Data (BIFOLD). Previously, we worked at Technische Universität Braunschweig and the University of Göttingen.

Recent publications

Dancer in the Dark: Synthesizing and Evaluating Polyglots for Blind Cross-Site Scripting.
Robin Kirchner, Jonas Möller, Marius Musch, David Klein, Konrad Rieck and Martin Johns.
Proc. of the 33rd USENIX Security Symposium, 2024. (to appear)

PDF

On the Detection of Image-Scaling Attacks in Machine Learning.
Erwin Quiring, Andreas Müller and Konrad Rieck.
Proc. of the 39th Annual Computer Security Applications Conference (ACSAC), 2023.

PDF Code

PAVUDI: Patch-based Vulnerability Discovery using Machine Learning.
Tom Ganz, Erik Imgrund, Martin Härterich and Konrad Rieck.
Proc. of the 39th Annual Computer Security Applications Conference (ACSAC), 2023.

PDF Code

Broken Promises: Measuring Confounding Effects in Learning-based Vulnerability Discovery.
Erik Imgrund, Tom Ganz, Martin Härterich, Niklas Risse, Lukas Pirch and Konrad Rieck.
Proc. of the 16th ACM Workshop on Artificial Intelligence and Security (AISEC), 2023.

PDF

See all publications of the research group.

Current projects

AIGENCY — Opportunities and Risks of Generative AI in Security

The project aims to systematically investigate the opportunities and risks of generative artificial intelligence in computer security. It explores generative models as a new tool as well as a new threat. The project is joint work with Fraunhofer AISEC, CISPA, FU Berlin, and Aleph Alpha.

BMBF 2023 – 2026

MALFOY — Machine Learning for Offensive Computer Security

The ERC Consolidator Grant MALFOY explores the application of machine learning in offensive computer security. It is an effort to understand how learning algorithms can be used by attackers and how this threat can be effectively mitigated.

ERC 2023 – 2028 Website

ALISON — Attacks against Machine Learning in Structured Domains

The goal of this project is to investigate the security of learning algorithms in structured domains. That is, the project develops a better understanding of attacks and defenses that operate in the problem space of learning algorithms rather than the feature space.

DFG 2023 – 2026

See all projects of the research group.

Job Applications

We are generally looking for motivated and skilled PhD students and postdocs to join our group, even when no open positions are currently advertised. If you are passionate about research and interested in combining machine learning and computer security, contact us directly at jobs@mlsec.org.

Before writing an unsolicited email, take some time to write a good cover letter. In this letter, you should describe why you are a good fit for our group and what research you enjoy doing. Include the result of (0x62df**215)%0xf0e5 in the subject line of your email.
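For reference, the expression in the subject-line challenge is a modular exponentiation. The minimal sketch below illustrates one way it could be evaluated in Python; the variable names are illustrative only, and the printed value is what the instructions ask you to include in the subject line.

    # Minimal sketch: evaluating the challenge expression from the
    # application instructions. Python's three-argument pow() performs
    # modular exponentiation without building the huge intermediate power,
    # so pow(base, exponent, modulus) equals (base ** exponent) % modulus.

    base = 0x62df      # hexadecimal literals as given above
    exponent = 215
    modulus = 0xf0e5

    result = pow(base, exponent, modulus)  # same as (0x62df**215) % 0xf0e5
    print(result)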

Contact

Technische Universität Berlin
Machine Learning and Security, TEL 8-2
Ernst-Reuter-Platz 7
10587 Berlin, Germany

Office: office@mlsec.tu-berlin.de
Responsible under § 55 Sect. 2 RStV (German press law):
Prof. Dr. Konrad Rieck