This webpage assembles a ranking of the most-cited security papers of the 2020s. The ranking is based on citation counts of papers published at top security conferences. More details are available here.
Top-cited papers from 2024
Zichen Gui, Kenneth G. Paterson, Sikhar Patranabis, and Bogdan Warinschi: SWiSSSE: System-Wide Security for Searchable Symmetric Encryption. Proceedings on Privacy Enhancing Technologies (PoPETS), 2024
Edoardo Debenedetti, Giorgio Severi, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Eric Wallace, Nicholas Carlini, and Florian Tramèr: Privacy Side Channels in Machine Learning Systems. USENIX Security Symposium, 2024
Xinyue Shen, Yiting Qu, Michael Backes, and Yang Zhang: Prompt Stealing Attacks Against Text-to-Image Generation Models. USENIX Security Symposium, 2024
Matthieu Meeus, Shubham Jain, Marek Rei, and Yves-Alexandre de Montjoye: Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models. USENIX Security Symposium, 2024
Zengrui Liu, Umar Iqbal, and Nitesh Saxena: Opted Out, Yet Tracked: Are Regulations Enough to Protect Your Privacy? Proceedings on Privacy Enhancing Technologies (PoPETS), 2024
Chao Wang, Yue Zhang, and Zhiqiang Lin: RootFree Attacks: Exploiting Mobile Platform's Super Apps From Desktop. ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2024
Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, and Ning Zhang: Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models. USENIX Security Symposium, 2024
Chongzhou Fang, Ning Miao, Shaurya Srivastav, Jialin Liu, Ruoyu Zhang, Ruijie Fang, Asmita, Ryan Tsang, Najmeh Nazari, Han Wang, and Houman Homayoun: Large Language Models for Code Analysis: Do LLMs Really Do Their Job? USENIX Security Symposium, 2024
Top-cited papers from 2023
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace: Extracting Training Data from Diffusion Models. USENIX Security Symposium, 2023
Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Ramesh Karri, and Brendan Dolan-Gavitt: Examining Zero-Shot Vulnerability Repair with Large Language Models. IEEE Symposium on Security and Privacy (S&P), 2023
Yi Zeng, Minzhou Pan, Hoang Anh Just, Lingjuan Lyu, Meikang Qiu, and Ruoxi Jia: Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information. ACM Conference on Computer and Communications Security (CCS), 2023
Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, and Bo Li: RAB: Provable Robustness Against Backdoor Attacks. IEEE Symposium on Security and Privacy (S&P), 2023
Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, and Santiago Zanella Béguelin: Analyzing Leakage of Personally Identifiable Information in Language Models. IEEE Symposium on Security and Privacy (S&P), 2023
Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh: Do Users Write More Insecure Code with AI Assistants? ACM Conference on Computer and Communications Security (CCS), 2023
Liyi Zhou, Xihan Xiong, Jens Ernstberger, Stefanos Chaliasos, Zhipeng Wang, Ye Wang, Kaihua Qin, Roger Wattenhofer, Dawn Song, and Arthur Gervais: SoK: Decentralized Finance (DeFi) Attacks. IEEE Symposium on Security and Privacy (S&P), 2023
Zeyang Sha, Zheng Li, Ning Yu, and Yang Zhang: DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models. ACM Conference on Computer and Communications Security (CCS), 2023
Top-cited papers from 2022
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, and Florian Tramèr: Membership Inference Attacks From First Principles. IEEE Symposium on Security and Privacy (S&P), 2022
Daniel Arp, Erwin Quiring, Feargus Pendlebury, Alexander Warnecke, Fabio Pierazzi, Christian Wressnegger, Lorenzo Cavallaro, and Konrad Rieck: Dos and Don'ts of Machine Learning in Computer Security. USENIX Security Symposium, 2022
Hammond Pearce, Baleegh Ahmad, Benjamin Tan, Brendan Dolan-Gavitt, and Ramesh Karri: Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions. IEEE Symposium on Security and Privacy (S&P), 2022
Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang: Dynamic Backdoor Attacks Against Machine Learning Models. IEEE European Symposium on Security and Privacy (EuroS&P), 2022
Kaihua Qin, Liyi Zhou, and Arthur Gervais: Quantifying Blockchain Extractable Value: How dark is the forest? IEEE Symposium on Security and Privacy (S&P), 2022
Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, and Daniel Ramage: Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning. IEEE Symposium on Security and Privacy (S&P), 2022
Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, and Reza Shokri: Enhanced Membership Inference Attacks against Machine Learning Models. ACM Conference on Computer and Communications Security (CCS), 2022
Dipanjan Das, Priyanka Bose, Nicola Ruaro, Christopher Kruegel, and Giovanni Vigna: Understanding Security Issues in the NFT Ecosystem. ACM Conference on Computer and Communications Security (CCS), 2022
Top-cited papers from 2021
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel: Extracting Training Data from Large Language Models. USENIX Security Symposium, 2021
Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot: Machine Unlearning. IEEE Symposium on Security and Privacy (S&P), 2021
Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong: FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. Network and Distributed System Security Symposium (NDSS), 2021
Virat Shejwalkar and Amir Houmansadr: Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. Network and Distributed System Security Symposium (NDSS), 2021
Lorenzo Grassi, Dmitry Khovratovich, Christian Rechberger, Arnab Roy, and Markus Schofnegger: Poseidon: A New Hash Function for Zero-Knowledge Proof Systems. USENIX Security Symposium, 2021
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, and Bo Li: Detecting AI Trojans Using Meta Neural Analysis. IEEE Symposium on Security and Privacy (S&P), 2021
Top-cited papers from 2020
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong: Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. USENIX Security Symposium, 2020
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, and Ling Liu: Data Poisoning Attacks Against Federated Learning Systems. European Symposium on Research in Computer Security (ESORICS), 2020
Jianbo Chen, Michael I. Jordan, and Martin J. Wainwright: HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. IEEE Symposium on Security and Privacy (S&P), 2020
Philip Daian, Steven Goldfeder, Tyler Kell, Yunqi Li, Xueyuan Zhao, Iddo Bentov, Lorenz Breidenbach, and Ari Juels: Flash Boys 2.0: Frontrunning in Decentralized Exchanges, Miner Extractable Value, and Consensus Instability. IEEE Symposium on Security and Privacy (S&P), 2020
Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, and Raluca Ada Popa: Delphi: A Cryptographic Inference Service for Neural Networks. USENIX Security Symposium, 2020
James Henry Bell, Kallista A. Bonawitz, Adrià Gascón, Tancrède Lepoint, and Mariana Raykova: Secure Single-Server Aggregation with (Poly)Logarithmic Overhead. ACM Conference on Computer and Communications Security (CCS), 2020
Kit Murdock, David F. Oswald, Flavio D. Garcia, Jo Van Bulck, Daniel Gruss, and Frank Piessens: Plundervolt: Software-based Fault Injection Attacks against Intel SGX. IEEE Symposium on Security and Privacy (S&P), 2020
Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz: GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models. ACM Conference on Computer and Communications Security (CCS), 2020
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot: High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security Symposium, 2020