Our research group conducts fundamental research at the intersection of computer security and machine learning. On the one hand, we develop intelligent systems that learn to protect computers from attacks and identify security problems automatically. On the other hand, we explore the security and privacy of machine learning itself by devising novel attacks and defenses.
We are part of the new Berlin Institute for the Foundations of Learning and Data (BIFOLD) at Technische Universität Berlin. Previously, we worked at Technische Universität Braunschweig and the University of Göttingen.
August 26, 2024 — We are hiring! 🧑‍💻 We have a new open PhD student position in our research group. Further details are available here. The application deadline is September 13, 2024.
August 14, 2024 — We are delighted to receive the Distinguished Paper Award at the USENIX Security Symposium for our work on blind cross-site scripting. This was a great collaboration with our friends at TU Braunschweig.
July 4, 2024 — We present four papers at ASIACCS in Singapore 🇸🇬. Felix discusses target selection in fuzzing, Jonas explores differential testing of JSON, Josiane tackles simbox fraud, and Alwin investigates instruction embeddings.
May 20, 2024 — We are honored and grateful to receive the IEEE S&P Test-of-Time Award for our work on code property graphs. Thank you all so much! 🏆😊
See all news and updates of the research group.
PhD Student Position: Security of Generative AI — Salary Grade TV-L 13
We are looking for another PhD student to join the research project AIGENCY. The project investigates the opportunities and risks of generative AI (such as ChatGPT) from a security perspective. It explores the attack surface of generative AI models and develops novel attack and protection mechanisms.
See all job openings.
Evil from Within: Machine Learning Backdoors Through Dormant Hardware Trojans.
Proc. of the 40th Annual Computer Security Applications Conference (ACSAC), 2024. (to appear)
Dancer in the Dark: Synthesizing and Evaluating Polyglots for Blind Cross-Site Scripting.
Proc. of the 33rd USENIX Security Symposium, 2024.
Distinguished Paper Award
SoK: Where to Fuzz? Assessing Target Selection Methods in Directed Fuzzing.
Proc. of the 19th ACM Asia Conference on Computer and Communications Security (ASIACCS), 2024.
Cross-Language Differential Testing of JSON Parsers.
Proc. of the 19th ACM Asia Conference on Computer and Communications Security (ASIACCS), 2024.
See all publications of the research group.
AIGENCY — Opportunities and Risks of Generative AI in Security
The project systematically investigates the opportunities and risks of generative artificial intelligence in computer security, exploring generative models both as a new tool and as a new threat. It is joint work with Fraunhofer AISEC, CISPA, FU Berlin, and Aleph Alpha.
MALFOY — Machine Learning for Offensive Computer Security
The ERC Consolidator Grant MALFOY explores the application of machine learning in offensive computer security. It is an effort to understand how attackers can exploit learning algorithms and how this threat can be effectively mitigated.
ALISON — Attacks against Machine Learning in Structured Domains
The goal of this project is to investigate the security of learning algorithms in structured domains. In particular, the project develops a better understanding of attacks and defenses that operate in the problem space of learning algorithms rather than in the feature space.
See all projects of the research group.
Technische Universität Berlin
Machine Learning and Security, TEL 8-2
Ernst-Reuter-Platz 7
10587 Berlin, Germany
Office: office@mlsec.tu-berlin.de
Responsibility under the German Press Law §55 Sect. 2 RStV:
Prof. Dr. Konrad Rieck