Accepted Papers

There were 34 papers accepted out of 158 submissions, resulting in an acceptance rate of 21.5%. For more details about the three types of papers accepted at SaTML 2024, see our call for papers.

Two best paper awards were announced at the conference:

  • SoK: AI Auditing: The Broken Bus on the Road to AI Accountability by Abeba Birhane (Trinity College Dublin), Ryan Steed (Carnegie Mellon University), Victor Ojewale (Brown University), Briana Vecchione (Cornell University), Inioluwa Deborah Raji (Mozilla Foundation).

  • Data Redaction from Conditional Generative Models by Zhifeng Kong (NVIDIA), Kamalika Chaudhuri (University of California, San Diego).

Systematization of Knowledge (SoK) Papers

  1. SoK: AI Auditing: The Broken Bus on the Road to AI Accountability
    Abeba Birhane (Trinity College Dublin), Ryan Steed (Carnegie Mellon University), Victor Ojewale (Brown University), Briana Vecchione (Cornell University), Inioluwa Deborah Raji (Mozilla Foundation)
    OpenReview
  2. SoK: A Review of Differentially Private Linear Models For High Dimensional Data
    Amol Khanna (Booz Allen Hamilton), Edward Raff (Booz Allen Hamilton), Nathan Inkawhich (Air Force Research Laboratory)
    OpenReview
  3. SoK: Pitfalls in Evaluating Black-Box Attacks
    Fnu Suya (University of Maryland, College Park), Anshuman Suri (University of Virginia), Tingwei Zhang (Cornell University), Jingtao Hong (Columbia University), Yuan Tian (UCLA), David Evans (University of Virginia)
    OpenReview
  4. SoK: Unifying Corroborative and Contributive Attributions in Large Language Models
    Theodora Worledge (Computer Science Department, Stanford University), Judy Hanwen Shen (Stanford University), Nicole Meister (Stanford University), Caleb Winston (Computer Science Department, Stanford University), Carlos Guestrin (Stanford University)
    OpenReview

Research Papers

  1. Evaluating Superhuman Models with Consistency Checks
    Lukas Fluri (ETH Zurich), Daniel Paleka (Department of Computer Science, ETH Zurich), Florian Tramèr (ETH Zurich)
    OpenReview
  2. Probabilistic Dataset Reconstruction from Interpretable Models
    Julien Ferry (École Polytechnique de Montréal, Université de Montréal), Ulrich Aïvodji (École de technologie supérieure, Université du Québec), Sébastien Gambs (Université du Québec à Montréal), Marie-José Huguet (LAAS / CNRS), Mohamed Siala (LAAS / CNRS)
    OpenReview
  3. Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders
    Andrew Geng (University of Wisconsin, Madison), Pin-Yu Chen (International Business Machines)
    OpenReview
  4. Backdoor Attack on Un-paired Medical Image-Text Pretrained Models: A Pilot Study on MedCLIP
    Ruinan Jin (University of British Columbia), Chun-Yin Huang (University of British Columbia), Chenyu You (Yale University), Xiaoxiao Li (University of British Columbia)
    OpenReview
  5. Certifiably Robust Reinforcement Learning through Model-Based Abstract Interpretation
    Chenxi Yang (University of Texas at Austin), Greg Anderson (Reed College), Swarat Chaudhuri (University of Texas at Austin)
    OpenReview
  6. Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM
    Chulin Xie (University of Illinois Urbana-Champaign), Pin-Yu Chen (International Business Machines), Qinbin Li (University of California, Berkeley), Arash Nourian (University of California, Berkeley), Ce Zhang (University of Chicago), Bo Li (University of Illinois Urbana-Champaign)
    OpenReview
  7. REStore: Black-Box Defense against DNN Backdoors with Rare Event Simulation
    Quentin Le Roux (INRIA), Kassem Kallas (INRIA), Teddy Furon (INRIA)
    OpenReview
  8. Shake to Leak: Amplifying the Generative Privacy Risk through Fine-tuning
    Zhangheng Li (University of Texas at Austin), Junyuan Hong (University of Texas at Austin), Bo Li (University of Illinois Urbana-Champaign), Zhangyang Wang (University of Texas at Austin)
    OpenReview
  9. EdgePruner: Poisoned Edge Pruning in Graph Contrastive Learning
    Hiroya Kato (KDDI Research, Inc.), Kento Hasegawa (KDDI Research, Inc.), Seira Hidano (KDDI Research, Inc.), Kazuhide Fukushima (KDDI Research, Inc.)
    OpenReview
  10. Improved Differentially Private Regression via Gradient Boosting
    Shuai Tang (Amazon Web Services), Sergul Aydore (Amazon), Michael Kearns (University of Pennsylvania), Saeyoung Rho (Columbia University), Aaron Roth (Amazon), Yichen Wang (Amazon), Yu-Xiang Wang (UC Santa Barbara), Steven Wu (Carnegie Mellon University)
    OpenReview
  11. Differentially Private Multi-Site Treatment Effect Estimation
    Tatsuki Koga (University of California, San Diego), Kamalika Chaudhuri (University of California, San Diego), David Page (Duke University)
    OpenReview
  12. Fast Certification of Vision-Language Models Using Incremental Randomized Smoothing
    Ashutosh Kumar Nirala (Iowa State University), Ameya Joshi (InstaDeep), Soumik Sarkar (Iowa State University), Chinmay Hegde (New York University)
    OpenReview
  13. Fair Federated Learning via Bounded Group Loss
    Shengyuan Hu (Carnegie Mellon University), Steven Wu (Carnegie Mellon University), Virginia Smith (Carnegie Mellon University)
    OpenReview
  14. Concentrated Differential Privacy for Bandits
    Achraf Azize (INRIA), Debabrota Basu (INRIA)
    OpenReview
  15. Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
    Yiwei Lu (University of Waterloo), Matthew Y. R. Yang (University of Waterloo), Gautam Kamath (University of Waterloo), Yaoliang Yu (University of Waterloo)
    OpenReview
  16. PILLAR: How to make semi-private learning more effective
    Yaxi Hu (Max Planck Institute for Intelligent Systems), Francesco Pinto (University of Oxford), Fanny Yang (Swiss Federal Institute of Technology), Amartya Sanyal (Max Planck Institute)
    OpenReview
  17. ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks
    Eleanor Clifford (Imperial College London), Ilia Shumailov (Google DeepMind), Yiren Zhao (Imperial College London), Ross Anderson (University of Edinburgh), Robert D. Mullins (University of Cambridge)
    OpenReview
  18. CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models
    Hossein Hajipour (CISPA, Saarland University, Saarland Informatics Campus), Keno Hassler (CISPA Helmholtz Center for Information Security), Thorsten Holz (CISPA Helmholtz Center for Information Security), Lea Schönherr (CISPA Helmholtz Center for Information Security), Mario Fritz (CISPA Helmholtz Center for Information Security)
    OpenReview
  19. ScionFL: Efficient and Robust Secure Quantized Aggregation
    Yaniv Ben-Itzhak (VMware), Helen Möllering (Technical University of Darmstadt), Benny Pinkas (Bar-Ilan University), Thomas Schneider (Technical University of Darmstadt), Ajith Suresh (Technology Innovation Institute (TII)), Oleksandr Tkachenko (Technical University of Darmstadt), Shay Vargaftik (VMware Research), Christian Weinert (Royal Holloway, University of London), Hossein Yalame (Technical University of Darmstadt), Avishay Yanai (VMware)
    OpenReview
  20. Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion
    Nishtha Madaan (Indian Institute of Technology Delhi), Srikanta J. Bedathur (Indian Institute of Technology Delhi)
    OpenReview
  21. Data Redaction from Conditional Generative Models
    Zhifeng Kong (NVIDIA), Kamalika Chaudhuri (University of California, San Diego)
    OpenReview
  22. Differentially Private Heavy Hitter Detection using Federated Analytics
    Karan Chadha (Stanford University), Hanieh Hashemi (Apple), John Duchi (Stanford University), Vitaly Feldman (Apple AI Research), Omid Javidbakht (Apple), Audra McMillan (Apple), Kunal Talwar (Apple)
    OpenReview
  23. Evading Black-box Classifiers Without Breaking Eggs
    Edoardo Debenedetti (Department of Computer Science, ETH Zurich), Nicholas Carlini (Google), Florian Tramèr (ETH Zurich)
    OpenReview
  24. Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features
    Hadi Elzayn (Stanford University), Emily Black (Barnard College), Patrick Vossler (Stanford University), Nathanael Jo (Massachusetts Institute of Technology), Jacob Goldin (University of Chicago), Daniel E. Ho (Stanford University)
    OpenReview
  25. Towards Scalable and Robust Model Versioning
    Wenxin Ding (University of Chicago), Arjun Nitin Bhagoji (University of Chicago), Ben Y. Zhao (University of Chicago), Haitao Zheng (University of Chicago)
    OpenReview
  26. OLYMPIA: A Simulation Framework for Evaluating the Concrete Scalability of Secure Aggregation Protocols
    Ivoline Ngong (University of Vermont), Nicholas Gibson (University of Vermont), Joseph Near (University of Vermont)
    OpenReview
  27. Understanding, Uncovering, and Mitigating the Causes of Inference Slowdown for Language Models
    Kamala Varma (University of Maryland, College Park), Arda Numanoğlu (Middle East Technical University), Yigitcan Kaya (University of California, Santa Barbara), Tudor Dumitras (University of Maryland, College Park)
    OpenReview
  28. The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models
    Hadi Mohaghegh Dolatabadi (University of Melbourne), Sarah Monazam Erfani (University of Melbourne), Christopher Leckie (University of Melbourne)
    OpenReview
  29. Under manipulations, are there AI models harder to audit?
    Augustin Godinot (Université Rennes I), Gilles Tredan (LAAS / CNRS), Erwan Le Merrer (INRIA), Camilla Penzo (PEReN - French Center of Expertise for Digital Platform Regulation), Francois Taiani (INRIA Rennes)
    OpenReview
  30. Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on Segmentation Models
    Francesco Croce (EPFL), Matthias Hein (University of Tübingen)
    OpenReview

Position Papers

None this year.