Accepted Papers
There were 34 papers accepted out of 158 submissions, for an acceptance rate of 21.5%. For more details about the three types of papers accepted at SaTML 2024, see our call for papers.
Two best paper awards were announced at the conference:
- SoK: AI Auditing: The Broken Bus on the Road to AI Accountability by Abeba Birhane (Trinity College Dublin), Ryan Steed (Carnegie Mellon University), Victor Ojewale (Brown University), Briana Vecchione (Cornell University), Inioluwa Deborah Raji (Mozilla Foundation).
- Data Redaction from Conditional Generative Models by Zhifeng Kong (NVIDIA), Kamalika Chaudhuri (University of California, San Diego).
Systematization of Knowledge (SoK) Papers
- SoK: AI Auditing: The Broken Bus on the Road to AI Accountability by Abeba Birhane (Trinity College Dublin), Ryan Steed (Carnegie Mellon University), Victor Ojewale (Brown University), Briana Vecchione (Cornell University), Inioluwa Deborah Raji (Mozilla Foundation)
- SoK: A Review of Differentially Private Linear Models For High Dimensional Data by Amol Khanna (Booz Allen Hamilton), Edward Raff (Booz Allen Hamilton), Nathan Inkawhich (Air Force Research Laboratory)
- SoK: Pitfalls in Evaluating Black-Box Attacks by Fnu Suya (University of Maryland, College Park), Anshuman Suri (University of Virginia), Tingwei Zhang (Cornell University), Jingtao Hong (Columbia University), Yuan Tian (UCLA), David Evans (University of Virginia)
- SoK: Unifying Corroborative and Contributive Attributions in Large Language Models by Theodora Worledge (Stanford University), Judy Hanwen Shen (Stanford University), Nicole Meister (Stanford University), Caleb Winston (Stanford University), Carlos Guestrin (Stanford University)
Research Papers
- Evaluating Superhuman Models with Consistency Checks by Lukas Fluri (ETH Zurich), Daniel Paleka (ETH Zurich), Florian Tramèr (ETH Zurich)
- Probabilistic Dataset Reconstruction from Interpretable Models by Julien Ferry (École Polytechnique de Montréal), Ulrich Aïvodji (École de technologie supérieure), Sébastien Gambs (Université du Québec à Montréal), Marie-José Huguet (LAAS / CNRS), Mohamed Siala (LAAS / CNRS)
- Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders by Andrew Geng (University of Wisconsin, Madison), Pin-Yu Chen (IBM)
- Backdoor Attack on Un-paired Medical Image-Text Pretrained Models: A Pilot Study on MedCLIP by Ruinan Jin (University of British Columbia), Chun-Yin Huang (University of British Columbia), Chenyu You (Yale University), Xiaoxiao Li (University of British Columbia)
- Certifiably Robust Reinforcement Learning through Model-Based Abstract Interpretation by Chenxi Yang (University of Texas at Austin), Greg Anderson (Reed College), Swarat Chaudhuri (University of Texas at Austin)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM by Chulin Xie (University of Illinois Urbana-Champaign), Pin-Yu Chen (IBM), Qinbin Li (University of California, Berkeley), Arash Nourian (University of California, Berkeley), Ce Zhang (University of Chicago), Bo Li (University of Illinois Urbana-Champaign)
- REStore: Black-Box Defense against DNN Backdoors with Rare Event Simulation by Quentin Le Roux (INRIA), Kassem Kallas (INRIA), Teddy Furon (INRIA)
- Shake to Leak: Amplifying the Generative Privacy Risk through Fine-tuning by Zhangheng Li (University of Texas at Austin), Junyuan Hong (University of Texas at Austin), Bo Li (University of Illinois Urbana-Champaign), Zhangyang Wang (University of Texas at Austin)
- EdgePruner: Poisoned Edge Pruning in Graph Contrastive Learning by Hiroya Kato (KDDI Research, Inc.), Kento Hasegawa (KDDI Research, Inc.), Seira Hidano (KDDI Research, Inc.), Kazuhide Fukushima (KDDI Research, Inc.)
- Improved Differentially Private Regression via Gradient Boosting by Shuai Tang (Amazon Web Services), Sergul Aydore (Amazon), Michael Kearns (University of Pennsylvania), Saeyoung Rho (Columbia University), Aaron Roth (Amazon), Yichen Wang (Amazon), Yu-Xiang Wang (UC Santa Barbara), Steven Wu (Carnegie Mellon University)
- Differentially Private Multi-Site Treatment Effect Estimation by Tatsuki Koga (University of California, San Diego), Kamalika Chaudhuri (University of California, San Diego), David Page (Duke University)
- Fast Certification of Vision-Language Models Using Incremental Randomized Smoothing by Ashutosh Kumar Nirala (Iowa State University), Ameya Joshi (InstaDeep), Soumik Sarkar (Iowa State University), Chinmay Hegde (New York University)
- Fair Federated Learning via Bounded Group Loss by Shengyuan Hu (Carnegie Mellon University), Steven Wu (Carnegie Mellon University), Virginia Smith (Carnegie Mellon University)
- Concentrated Differential Privacy for Bandits by Achraf Azize (INRIA), Debabrota Basu (INRIA)
- Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors by Yiwei Lu (University of Waterloo), Matthew Y. R. Yang (University of Waterloo), Gautam Kamath (University of Waterloo), Yaoliang Yu (University of Waterloo)
- PILLAR: How to make semi-private learning more effective by Yaxi Hu (Max Planck Institute for Intelligent Systems), Francesco Pinto (University of Oxford), Fanny Yang (Swiss Federal Institute of Technology), Amartya Sanyal (Max Planck Institute)
- ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks by Eleanor Clifford (Imperial College London), Ilia Shumailov (Google DeepMind), Yiren Zhao (Imperial College London), Ross Anderson (University of Edinburgh), Robert D. Mullins (University of Cambridge)
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models by Hossein Hajipour (CISPA Helmholtz Center for Information Security, Saarland University), Keno Hassler (CISPA Helmholtz Center for Information Security), Thorsten Holz (CISPA Helmholtz Center for Information Security), Lea Schönherr (CISPA Helmholtz Center for Information Security), Mario Fritz (CISPA Helmholtz Center for Information Security)
- ScionFL: Efficient and Robust Secure Quantized Aggregation by Yaniv Ben-Itzhak (VMware), Helen Möllering (Technical University of Darmstadt), Benny Pinkas (Bar-Ilan University), Thomas Schneider (Technical University of Darmstadt), Ajith Suresh (Technology Innovation Institute), Oleksandr Tkachenko (Technical University of Darmstadt), Shay Vargaftik (VMware Research), Christian Weinert (Royal Holloway, University of London), Hossein Yalame (Technical University of Darmstadt), Avishay Yanai (VMware)
- Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion by Nishtha Madaan (Indian Institute of Technology Delhi), Srikanta J. Bedathur (Indian Institute of Technology Delhi)
- Data Redaction from Conditional Generative Models by Zhifeng Kong (NVIDIA), Kamalika Chaudhuri (University of California, San Diego)
- Differentially Private Heavy Hitter Detection using Federated Analytics by Karan Chadha (Stanford University), Hanieh Hashemi (Apple), John Duchi (Stanford University), Vitaly Feldman (Apple AI Research), Omid Javidbakht (Apple), Audra McMillan (Apple), Kunal Talwar (Apple)
- Evading Black-box Classifiers Without Breaking Eggs by Edoardo Debenedetti (ETH Zurich), Nicholas Carlini (Google), Florian Tramèr (ETH Zurich)
- Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features by Hadi Elzayn (Stanford University), Emily Black (Barnard College), Patrick Vossler (Stanford University), Nathanael Jo (Massachusetts Institute of Technology), Jacob Goldin (University of Chicago), Daniel E. Ho (Stanford University)
- Towards Scalable and Robust Model Versioning by Wenxin Ding (University of Chicago), Arjun Nitin Bhagoji (University of Chicago), Ben Y. Zhao (University of Chicago), Haitao Zheng (University of Chicago)
- OLYMPIA: A Simulation Framework for Evaluating the Concrete Scalability of Secure Aggregation Protocols by Ivoline Ngong (University of Vermont), Nicholas Gibson (University of Vermont), Joseph Near (University of Vermont)
- Understanding, Uncovering, and Mitigating the Causes of Inference Slowdown for Language Models by Kamala Varma (University of Maryland, College Park), Arda Numanoğlu (Middle East Technical University), Yigitcan Kaya (University of California, Santa Barbara), Tudor Dumitras (University of Maryland, College Park)
- The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models by Hadi Mohaghegh Dolatabadi (University of Melbourne), Sarah Monazam Erfani (University of Melbourne), Christopher Leckie (University of Melbourne)
- Under manipulations, are there AI models harder to audit? by Augustin Godinot (Université Rennes I), Gilles Tredan (LAAS / CNRS), Erwan Le Merrer (INRIA), Camilla Penzo (PEReN - French Center of Expertise for Digital Platform Regulation), Francois Taiani (INRIA Rennes)
- Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on Segmentation Models by Francesco Croce (EPFL), Matthias Hein (University of Tübingen)
Position Papers
None this year.