Research Group
Machine Learning and Security
View from our building over Berlin.

Research

Our research group conducts fundamental research at the intersection of computer security and machine learning. On the one hand, we develop intelligent systems that learn to protect computers from attacks and to identify security problems automatically. On the other hand, we explore the security and privacy of machine learning itself by devising novel attacks and defenses.

Current Research Topics

Adversarial Machine Learning

As our first research topic, we investigate the security and privacy of machine learning systems. Our objective is to create learning-based systems that are resilient to different forms of attacks, including adversarial examples, poisoning, and backdoors. To achieve this goal, we approach the problem jointly from the perspective of the attacker and the defender. Following is a selection of related publications from our group, followed by a small illustrative sketch:

Machine Unlearning of Features and Labels.
Proc. of the 30th Network and Distributed System Security Symposium (NDSS), 2023.
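
To make the first of these attack classes concrete, the sketch below computes an adversarial example with the fast gradient sign method. It is an illustration only and not the method of the publications above; the model, input x, label y, and perturbation budget eps are placeholders for a standard PyTorch classifier.

# Minimal FGSM sketch: perturb an input within an eps-ball so that a
# trained classifier is more likely to misclassify it (illustrative only).
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Return an adversarial version of input x with true label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true label
    loss.backward()
    # Take one signed gradient step that increases the loss, then clip
    # the result back to the valid input range [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()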

Intelligent Security Systems

Our group has extensive experience in the development of intelligent security systems. We have devised learning-based systems for detecting attacks, analyzing malware, and discovering vulnerabilities. Our goal is to establish security solutions that adapt to changing conditions and provide automatic protection against different forms of threats. Following is a selection of related publications from this research topic, followed by a small illustrative sketch:

Dos and Don'ts of Machine Learning in Computer Security.
Proc. of the 31st USENIX Security Symposium, 2022.

ZOE: Content-based Anomaly Detection for Industrial Control Systems.
Proc. of the 48th Conference on Dependable Systems and Networks (DSN), 127–138, 2018.

Drebin: Efficient and Explainable Detection of Android Malware in Your Pocket.
Proc. of the 21st Network and Distributed System Security Symposium (NDSS), 2014.
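
As a toy illustration of content-based anomaly detection, which is the general idea behind detectors such as ZOE but not its actual model, the sketch below scores a message by the fraction of byte n-grams that never appear in benign training traffic; the example messages and the choice of n are assumptions made for illustration.

# Toy content-based anomaly detector over byte n-grams (illustration only,
# not the ZOE system referenced above).
from collections import Counter

def ngrams(data: bytes, n: int = 3):
    return (data[i:i + n] for i in range(len(data) - n + 1))

def train(benign_messages, n: int = 3):
    """Count the n-grams observed in benign traffic."""
    model = Counter()
    for msg in benign_messages:
        model.update(ngrams(msg, n))
    return model

def anomaly_score(model, msg: bytes, n: int = 3):
    """Fraction of n-grams in msg that never occurred during training."""
    grams = list(ngrams(msg, n))
    if not grams:
        return 0.0
    return sum(g not in model for g in grams) / len(grams)

# Messages with many unseen n-grams receive scores close to 1.0.
model = train([b"GET /status", b"SET temp=21", b"GET /temp"])
print(anomaly_score(model, b"GET /status"))             # low, familiar content
print(anomaly_score(model, b"\x90\x90\x90 shellcode"))  # high, unseen content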

Novel Attacks and Defenses

We believe that defensive and offensive security techniques must go hand in hand to improve practical protection. Consequently, we research methods for identifying unknown security and privacy vulnerabilities and, to complement this work, develop defenses against these novel threats. Following is a selection of publications from this branch of our research, followed by a small illustrative sketch:

Dancer in the Dark: Synthesizing and Evaluating Polyglots for Blind Cross-Site Scripting.
Proc. of the 33rd USENIX Security Symposium, 2024. (to appear)

Misleading Authorship Attribution of Source Code using Adversarial Learning.
Proc. of the 28th USENIX Security Symposium, 2019.

Automatic Inference of Search Patterns for Taint-Style Vulnerabilities.
Proc. of the 36th IEEE Symposium on Security and Privacy (S&P), 2015.
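
As a highly simplified illustration of searching code for taint-style flaws, and not the automatic pattern inference described in the paper above, the sketch below flags lines where data from a source function reaches a sink without passing a sanitizer; the names read_input, validate, and memcpy are hypothetical stand-ins for patterns a real analysis would infer.

# Toy taint-style pattern search (illustration only): report sink calls whose
# argument originates from a source and was never sanitized. The source,
# sanitizer, and sink names below are hypothetical placeholders.
import re

SOURCE = re.compile(r"(\w+)\s*=\s*read_input\(")
SANITIZER = re.compile(r"validate\(\s*(\w+)\s*\)")
SINK = re.compile(r"memcpy\(\s*\w+\s*,\s*(\w+)")

def find_taint_style_matches(code: str):
    tainted, findings = set(), []
    for lineno, line in enumerate(code.splitlines(), 1):
        if m := SOURCE.search(line):
            tainted.add(m.group(1))      # variable now carries external data
        if m := SANITIZER.search(line):
            tainted.discard(m.group(1))  # sanitized variables are cleared
        if (m := SINK.search(line)) and m.group(1) in tainted:
            findings.append((lineno, m.group(1)))
    return findings

print(find_taint_style_matches("buf = read_input()\nmemcpy(dst, buf, len)"))
# -> [(2, 'buf')]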

See all our publications.

Running Research Projects

AIGENCY — Opportunities and Risks of Generative AI in Security

The project aims to systematically investigate the opportunities and risks of generative artificial intelligence in computer security. It explores generative models both as a new tool and as a new threat. The project is joint work with Fraunhofer AISEC, CISPA, FU Berlin, and Aleph Alpha.

BMBF 2023 – 2026

MALFOY — Machine Learning for Offensive Computer Security

The ERC Consolidator Grant MALFOY explores the application of machine learning in offensive computer security. It is an effort to understand how learning algorithms can be used by attackers and how this threat can be effectively mitigated.

ERC 2023 – 2028 Website

ALISON — Attacks against Machine Learning in Structured Domains

The goal of this project is to investigate the security of learning algorithms in structured domains. That is, the project develops a better understanding of attacks and defenses that operate in the problem space of learning algorithms rather than the feature space.

DFG 2023 – 2026

See all our research projects.