This webpage assembles a ranking of top-cited security papers from the 2020s. The ranking is based on citations of papers published at top security conferences. More details are available here.
Top-cited papers from 2025
Jan Lauinger, Jens Ernstberger, Andreas Finkenzeller, and Sebastian Steinhorst: Janus: Fast Privacy-Preserving Data Provenance For TLS. Proceedings on Privacy Enhancing Technologies (PoPETS), 2025
Jayshree Sarathy and Salil P. Vadhan: Analyzing the Differentially Private Theil-Sen Estimator for Simple Linear Regression. Proceedings on Privacy Enhancing Technologies (PoPETS), 2025
Cong Zuo, Shangqi Lai, Shi-Feng Sun, Xingliang Yuan, Joseph K. Liu, Jun Shao, Huaxiong Wang, Liehuang Zhu, and Shujie Cui: Searchable Encryption for Conjunctive Queries with Extended Forward and Backward Privacy. Proceedings on Privacy Enhancing Technologies (PoPETS), 2025
Christopher Harth-Kitzerow, Ajith Suresh, Yongqin Wang, Hossein Yalame, Georg Carle, and Murali Annavaram: High-Throughput Secure Multiparty Computation with an Honest Majority in Various Network Settings. Proceedings on Privacy Enhancing Technologies (PoPETS), 2025
Fredrik Meisingseth and Christian Rechberger: SoK: Computational and Distributed Differential Privacy for MPC. Proceedings on Privacy Enhancing Technologies (PoPETS), 2025
Fredrik Meisingseth, Christian Rechberger, and Fabian Schmid: Practical Two-party Computational Differential Privacy with Active Security. Proceedings on Privacy Enhancing Technologies (PoPETS), 2025
Top-cited papers from 2024
Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang: "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. ACM Conference on Computer and Communications Security (CCS), 2024
Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, and Yang Liu: MASTERKEY: Automated Jailbreaking of Large Language Model Chatbots. Network and Distributed System Security Symposium (NDSS), 2024
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr: Poisoning Web-Scale Training Datasets is Practical. IEEE Symposium on Security and Privacy (S&P), 2024
Zeyang Sha, Yicong Tan, Mingjie Li, Michael Backes, and Yang Zhang: ZeroFake: Zero-Shot Detection of Fake Images Generated and Edited by Text-to-Image Generation Models. ACM Conference on Computer and Communications Security (CCS), 2024
Ruijie Meng, Martin Mirchev, Marcel Böhme, and Abhik Roychoudhury: Large Language Model guided Protocol Fuzzing. Network and Distributed System Security Symposium (NDSS), 2024
Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, and Yinzhi Cao: SneakyPrompt: Jailbreaking Text-to-image Generative Models. IEEE Symposium on Security and Privacy (S&P), 2024
Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, and Ning Zhang: Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models. USENIX Security Symposium, 2024
Hojjat Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, Ben Zorn, and Robert Sim: TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. IEEE Symposium on Security and Privacy (S&P), 2024
Xinlei He, Savvas Zannettou, Yun Shen, and Yang Zhang: You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content. IEEE Symposium on Security and Privacy (S&P), 2024
Mingxun Zhou, Andrew Park, Wenting Zheng, and Elaine Shi: Piano: Extremely Simple, Single-Server PIR with Sublinear Server Computation. IEEE Symposium on Security and Privacy (S&P), 2024
Top-cited papers from 2023
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace: Extracting Training Data from Diffusion Models. USENIX Security Symposium, 2023
Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Ramesh Karri, and Brendan Dolan-Gavitt: Examining Zero-Shot Vulnerability Repair with Large Language Models. IEEE Symposium on Security and Privacy (S&P), 2023
Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, and Santiago Zanella Béguelin: Analyzing Leakage of Personally Identifiable Information in Language Models. IEEE Symposium on Security and Privacy (S&P), 2023
Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, and Ruoxi Jia: Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. ACM Conference on Computer and Communications Security (CCS), 2023
Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, and Bo Li: RAB: Provable Robustness Against Backdoor Attacks. IEEE Symposium on Security and Privacy (S&P), 2023
Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh: Do Users Write More Insecure Code with AI Assistants? ACM Conference on Computer and Communications Security (CCS), 2023
Liyi Zhou, Xihan Xiong, Jens Ernstberger, Stefanos Chaliasos, Zhipeng Wang, Ye Wang, Kaihua Qin, Roger Wattenhofer, Dawn Song, and Arthur Gervais: SoK: Decentralized Finance (DeFi) Attacks. IEEE Symposium on Security and Privacy (S&P), 2023
Alexander Warnecke, Lukas Pirch, Christian Wressnegger, and Konrad Rieck: Machine Unlearning of Features and Labels. Network and Distributed System Security Symposium (NDSS), 2023
Top-cited papers from 2022
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramèr: Membership Inference Attacks From First Principles. IEEE Symposium on Security and Privacy (S&P), 2022
Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri: Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions. IEEE Symposium on Security and Privacy (S&P), 2022
Daniel Arp, Erwin Quiring, Feargus Pendlebury, Alexander Warnecke, Fabio Pierazzi, Christian Wressnegger, Lorenzo Cavallaro, and Konrad Rieck: Dos and Don'ts of Machine Learning in Computer Security. USENIX Security Symposium, 2022
Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang: Dynamic Backdoor Attacks Against Machine Learning Models. IEEE European Symposium on Security and Privacy (EuroS&P), 2022
Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, and Daniel Ramage: Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning. IEEE Symposium on Security and Privacy (S&P), 2022
Kaihua Qin, Liyi Zhou, and Arthur Gervais: Quantifying Blockchain Extractable Value: How dark is the forest? IEEE Symposium on Security and Privacy (S&P), 2022
Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, and Reza Shokri: Enhanced Membership Inference Attacks against Machine Learning Models. ACM Conference on Computer and Communications Security (CCS), 2022
Top-cited papers from 2021
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel: Extracting Training Data from Large Language Models. USENIX Security Symposium, 2021
Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot: Machine Unlearning. IEEE Symposium on Security and Privacy (S&P), 2021
Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong: FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. Network and Distributed System Security Symposium (NDSS), 2021
Virat Shejwalkar and Amir Houmansadr: Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. Network and Distributed System Security Symposium (NDSS), 2021
Lorenzo Grassi, Dmitry Khovratovich, Christian Rechberger, Arnab Roy, and Markus Schofnegger: Poseidon: A New Hash Function for Zero-Knowledge Proof Systems. USENIX Security Symposium, 2021
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, and Bo Li: Detecting AI Trojans Using Meta Neural Analysis. IEEE Symposium on Security and Privacy (S&P), 2021
Ellis Fenske, Dane Brown, Jeremy Martin, Travis Mayberry, Peter Ryan, and Erik C. Rye: Three Years Later: A Study of MAC Address Randomization In Mobile Devices And When It Succeeds. Proceedings on Privacy Enhancing Technologies (PoPETS), 2021
Top-cited papers from 2020
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong: Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. USENIX Security Symposium, 2020
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, and Ling Liu: Data Poisoning Attacks Against Federated Learning Systems. European Symposium on Research in Computer Security (ESORICS), 2020
Jianbo Chen, Michael I. Jordan, and Martin J. Wainwright: HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. IEEE Symposium on Security and Privacy (S&P), 2020
Philip Daian, Steven Goldfeder, Tyler Kell, Yunqi Li, Xueyuan Zhao, Iddo Bentov, Lorenz Breidenbach, and Ari Juels: Flash Boys 2.0: Frontrunning in Decentralized Exchanges, Miner Extractable Value, and Consensus Instability. IEEE Symposium on Security and Privacy (S&P), 2020
Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, and Raluca Ada Popa: Delphi: A Cryptographic Inference Service for Neural Networks. USENIX Security Symposium, 2020
James Henry Bell, Kallista A. Bonawitz, Adrià Gascón, Tancrède Lepoint, and Mariana Raykova: Secure Single-Server Aggregation with (Poly)Logarithmic Overhead. ACM Conference on Computer and Communications Security (CCS), 2020
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot: High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security Symposium, 2020
Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz: GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models. ACM Conference on Computer and Communications Security (CCS), 2020
Kit Murdock, David F. Oswald, Flavio D. Garcia, Jo Van Bulck, Daniel Gruss, and Frank Piessens: Plundervolt: Software-based Fault Injection Attacks against Intel SGX. IEEE Symposium on Security and Privacy (S&P), 2020