Our research group conducts fundamental research at the intersection of computer security and machine learning. On the one hand, we develop intelligent systems that learn to protect computers from attacks and to identify security problems automatically. On the other hand, we explore the security and privacy of machine learning by devising novel attacks and defenses.
Adversarial Machine Learning
As our first research topic, we investigate the security and privacy of machine learning systems. Our objective is to create learning-based systems that are resilient to different forms of attacks, including adversarial examples, poisoning, and backdoors. To achieve this goal, we approach the research jointly from the perspective of the attacker and the defender. The following is a selection of related publications from our group (a minimal sketch of an adversarial example follows the list):
No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning.
Proc. of the 32nd USENIX Security Symposium, 2023.
Machine Unlearning of Features and Labels.
Proc. of the 30th Network and Distributed System Security Symposium (NDSS), 2023.
Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning.
Proc. of the 29th USENIX Security Symposium, 2020.
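To make the first of these attack classes concrete, here is a minimal sketch of crafting an adversarial example with the fast gradient sign method, a standard technique from the literature rather than a method from our papers; the model, inputs, and parameter values are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, label, eps=0.03):
        # Illustrative sketch (not from our publications): add a small
        # perturbation, bounded by eps, in the direction that maximally
        # increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Step along the sign of the gradient and clamp the result
        # back to the valid input range [0, 1].
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

An input perturbed this way typically looks unchanged to a human observer yet is misclassified by the model, which is what makes this class of attacks so troubling.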
Intelligent Security Systems
Our group has extensive experience in developing intelligent security systems. We have devised learning-based systems for detecting attacks, analyzing malware, and discovering vulnerabilities. Our goal is to establish security solutions that adapt to changing conditions and automatically protect against different forms of threats. The following is a selection of related publications from this research topic (a toy example of content-based anomaly detection follows the list):
Dos and Don'ts of Machine Learning in Computer Security.
Proc. of the 31st USENIX Security Symposium, 2022.
ZOE: Content-based Anomaly Detection for Industrial Control Systems.
Proc. of the 48th Conference on Dependable Systems and Networks (DSN), 2018.
Automatic Inference of Search Patterns for Taint-Style Vulnerabilities.
Proc. of the 36th IEEE Symposium on Security and Privacy (S&P), 2015.
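As a simplified illustration of content-based anomaly detection, the general idea behind systems such as ZOE (though not its actual algorithm), the following toy detector scores a message by the fraction of its byte n-grams that never occurred in benign training traffic; all names, messages, and parameters here are made up for illustration.

    def ngrams(data: bytes, n: int = 3):
        # Extract the set of byte n-grams contained in a message.
        return {data[i:i + n] for i in range(len(data) - n + 1)}

    class NgramAnomalyDetector:
        # Toy content-based anomaly detector over byte n-grams.
        def __init__(self, n: int = 3):
            self.n = n
            self.known = set()

        def fit(self, messages):
            # Learn the n-grams occurring in benign training traffic.
            for msg in messages:
                self.known |= ngrams(msg, self.n)

        def score(self, msg: bytes) -> float:
            # Anomaly score: fraction of n-grams unseen during training.
            grams = ngrams(msg, self.n)
            if not grams:
                return 0.0
            return len(grams - self.known) / len(grams)

    # Usage: train on benign messages, flag messages with high scores.
    detector = NgramAnomalyDetector(n=3)
    detector.fit([b"GET /status", b"SET pump=on", b"SET valve=off"])
    print(detector.score(b"SET pump=off"))          # low: familiar content
    print(detector.score(b"\x90\x90\x90EXPLOIT"))   # high: unseen n-grams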
Novel Attacks and Defenses
We believe that defensive and offensive security techniques must go hand in hand to improve protection in practice. Consequently, we research methods for identifying unknown security and privacy vulnerabilities and, to complement this work, develop approaches and solutions to defend against these novel threats. The following is a selection of publications from this branch of our research:
Seeing through: Analyzing and Attacking Virtual Backgrounds in Video Calls.
Proc. of the 34th USENIX Security Symposium, 2025. (to appear)
Dancer in the Dark: Synthesizing and Evaluating Polyglots for Blind Cross-Site Scripting.
Proc. of the 33rd USENIX Security Symposium, 2024.
Misleading Authorship Attribution of Source Code using Adversarial Learning.
Proc. of the 28th USENIX Security Symposium, 2019.
See all our publications.
AIGENCY — Opportunities and Risks of Generative AI in Security
The project aims to systematically investigate the opportunities and risks of generative artificial intelligence in computer security. It explores generative models as a new tool as well as a new threat. The project is joint work with Fraunhofer AISEC, CISPA, FU Berlin, and Aleph Alpha.
MALFOY — Machine Learning for Offensive Computer Security
The ERC Consolidator Grant MALFOY explores the application of machine learning in offensive computer security. It is an effort to understand how learning algorithms can be used by attackers and how this threat can be effectively mitigated.
ALISON — Attacks against Machine Learning in Structured Domains
The goal of this project is to investigate the security of learning algorithms in structured domains. That is, the project develops a better understanding of attacks and defenses that operate in the problem space of learning algorithms, where only valid objects such as programs or documents can be manipulated, rather than in the feature space.
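As a toy illustration of this distinction (our own sketch, not the project's methodology): a feature-space attacker may set feature values arbitrarily, whereas a problem-space attacker can only apply valid transformations to the underlying object, such as inserting dead code into a program, and must accept whatever feature changes these induce. All functions below are hypothetical.

    from collections import Counter

    def features(code: str) -> Counter:
        # Toy feature map: bag of whitespace-separated tokens.
        return Counter(code.split())

    def insert_dead_code(code: str) -> str:
        # A problem-space transformation: a semantics-preserving edit
        # that keeps the program valid and functionally equivalent.
        return code + "\nif False: unused = 0"

    program = "def f(x):\n    return x + 1"
    # A feature-space attacker could change these counts arbitrarily;
    # a problem-space attacker only reaches feature vectors that are
    # induced by valid edits such as the one below.
    print(features(program))
    print(features(insert_dead_code(program)))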
See all our research projects.