Each semester, we offer several courses on machine learning and security. These include lectures on learning algorithms in security systems and on adversarial machine learning, as well as labs where students can experiment with attacks and malicious code. We enjoy teaching, and we have even won awards for our lectures and practical courses.
AML — Adversarial Machine Learning
This integrated lecture is concerned with adversarial machine learning. It explores various attacks on learning algorithms, including white-box and black-box adversarial examples, poisoning, backdoors, membership inference, and model extraction. It also examines the security and privacy implications of these attacks and discusses defensive strategies, ranging from threat modeling to integrated countermeasures.
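To give a flavor of the white-box attacks covered, the sketch below crafts an untargeted adversarial example with the fast gradient sign method (FGSM) in PyTorch. It is illustrative only; the model, inputs, and hyperparameters are placeholders and not part of the course material.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Craft an untargeted adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss of the true label on the clean input
    loss.backward()
    # Step in the direction that increases the loss, bounded by eps per feature.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep the perturbed input in a valid range
```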
This lab is a hands-on course that explores machine learning in computer security. Students design and develop intelligent systems for security problems such as attack detection, malware clustering, and vulnerability discovery. The developed systems are trained and evaluated on real-world data, providing insight into their strengths and weaknesses in practice. The lab is a continuation of the lecture "Machine Learning for Computer Security", and knowledge from that course is therefore expected.
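As a rough impression of the kind of system built in the lab, the following sketch trains an anomaly-based attack detector with scikit-learn. The feature vectors here are synthetic stand-ins; in the lab, real-world data is used instead.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 8))   # feature vectors of benign events
attacks = rng.normal(4.0, 1.0, size=(20, 8))   # feature vectors of attack events

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(benign)                           # learn a model of benign behavior only

labels = detector.predict(attacks)             # -1 = anomaly, 1 = normal
print(f"{(labels == -1).sum()} of {len(attacks)} attacks flagged as anomalous")
```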
AURA — Automatic Vulnerability Repair and Analysis
This project explores recent advances in automated software analysis and repair. Students will develop, implement, and evaluate techniques for analyzing source code, identifying security vulnerabilities, and automatically creating patches. The project is inspired by the DARPA AI Cyber Challenge (AIxCC) and aims to develop AI-driven code analysis while uncovering both its capabilities and its limitations.
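As a toy illustration of source-code analysis in this spirit (not an AIxCC technique), the sketch below uses Python's ast module to flag calls to eval() and exec(), a common source of code-injection vulnerabilities. The scanned snippet and the rule are made up for this example.

```python
import ast

SOURCE = """
user_input = input()
result = eval(user_input)  # potentially dangerous: code injection
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id in {"eval", "exec"}:
            print(f"line {node.lineno}: suspicious call to {node.func.id}()")
```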
RAID — Reproducing AI Attacks and Defenses
This project puts recent AI research to the test. Participants will re-implement current attack and defense techniques that utilize machine learning, evaluate their capabilities, and design improvements. Possible techniques include attacks and defenses for large language models and computer vision systems. The overall goal is to learn about the state of the art in AI security and reproduce results where possible.
LEAK — Unusual Side Channels and Privacy Leaks
In this block seminar, we will look at unusual ways in which an attacker can obtain secret information. We examine various physical side channels through which information can escape from a computer, such as acoustic, optical, and electromagnetic leaks. We also discuss the security and privacy implications of these attacks and appropriate defenses. The seminar is aimed at Bachelor students. No prior knowledge of side channels is required, but a strong interest is assumed.
CARE — Code Analysis and Reverse Engineering
This block seminar is concerned with the analysis and reverse engineering of code. We will cover different techniques for program analysis of source code and binary code. In addition, we will look at concepts for understanding unknown software, reverse engineering its functionality, and discovering security vulnerabilities. The seminar is intended for Master students.
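For a first impression of binary analysis, here is a tiny sketch that disassembles a hand-picked x86-64 byte sequence with the Capstone library; it is illustrative only and not taken from the seminar material.

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# push rbp; mov rbp, rsp; xor rax, rax; pop rbp; ret
code = b"\x55\x48\x89\xe5\x48\x31\xc0\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x1000):
    print(f"0x{insn.address:x}: {insn.mnemonic} {insn.op_str}")
```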
Below is a list of all the courses we have offered in recent years. Note that some courses are not offered regularly, while others are planned but not yet available. For details, please consult the respective course pages on the ISIS platform of TU Berlin.
Are you looking for an exciting topic for your Bachelor's or Master's thesis? We offer research-oriented thesis topics on machine learning and security, which we design together with the students. Contact Prof. Rieck by email and ask for further details. Please include the result of (23**42)%2248 in the subject line.