Accepted Papers 2023
There were 40 papers accepted out of 152 submissions, resulting in an acceptance rate of 26.3%. For more details about the three types of papers accepted at SaTML 2023, see our call for papers.
Two best paper awards were announced at the conference:
- SoK: A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms by Amanda Coston (Carnegie Mellon University, USA), Anna Kawakami (Carnegie Mellon University, USA), Haiyi Zhu (Carnegie Mellon University, USA), Ken Holstein (Carnegie Mellon University, USA), and Hoda Heidari (Carnegie Mellon University, USA).
- Optimal Data Acquisition with Privacy-Aware Agents by Rachel Cummings (Columbia University), Hadi Elzayn (Stanford University), Emmanouil Pountourakis (Drexel University), Vasilis Gkatzelis (Drexel University), and Juba Ziani (Georgia Institute of Technology).
Conference attendees can access the proceedings here (username and password communicated at the conference).
Systematization of Knowledge (SoK) Papers
- SoK: Harnessing Prior Knowledge for Explainable Machine Learning: An Overview by Katharina Beckh (Fraunhofer IAIS, Germany), Sebastian Müller (University of Bonn, Germany), Matthias Jakobs (TU Dortmund University, Germany), Vanessa Toborek (University of Bonn, Germany), Hanxiao Tan (TU Dortmund University, Germany), Raphael Fischer (TU Dortmund University, Germany), Pascal Welke (University of Bonn, Germany), Sebastian Houben (Hochschule Bonn-Rhein-Sieg, Germany), and Laura von Rueden (Fraunhofer IAIS, Germany)
- SoK: A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms by Amanda Coston (Carnegie Mellon University, USA), Anna Kawakami (Carnegie Mellon University, USA), Haiyi Zhu (Carnegie Mellon University, USA), Ken Holstein (Carnegie Mellon University, USA), and Hoda Heidari (Carnegie Mellon University, USA)
- SoK: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks by Tilman Rauker (n/a), Anson Ho (Epoch), Stephen Casper (MIT CSAIL), and Dylan Hadfield-Menell (MIT CSAIL)
Research Papers
- Reducing Certified Regression to Certified Classification for General Poisoning Attacks by Zayd Hammoudeh (University of Oregon, USA) and Daniel Lowd (University of Oregon, USA)
- Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses by Ari Karchmer (Boston University)
- Towards Transferable Unrestricted Adversarial Examples with Minimum Changes by Fangcheng Liu (Peking University), Chao Zhang (Peking University), and Hongyang Zhang (University of Waterloo)
- PolyKervNets: Activation-free Neural Networks For Efficient Private Inference by Toluwani Aremu (Mohamed Bin Zayed Institute of Artificial Intelligence, UAE) and Karthik Nandakumar (Mohamed Bin Zayed Institute of Artificial Intelligence, UAE)
- Exploiting Fairness to Enhance Sensitive Attributes Reconstruction by Julien Ferry (LAAS-CNRS, Université de Toulouse, CNRS, France), Ulrich Aïvodji (Ecole de Technologie Supérieure, Canada), Sébastien Gambs (Université du Québec à Montréal, Canada), Marie-José Huguet (LAAS-CNRS, Université de Toulouse, CNRS, INSA, France), and Mohamed Siala (LAAS-CNRS, Université de Toulouse, CNRS, INSA, France)
- ExPLoit: Extracting Private Labels in Split Learning by Sanjay Kariyappa (Georgia Institute of Technology) and Moinuddin K Qureshi (Georgia Institute of Technology)
- Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning? by Guy Heller (Bar-Ilan University, Ramat Gan, Israel) and Ethan Fetaya (Bar-Ilan University, Ramat Gan, Israel)
- A Light Recipe to Train Robust Vision Transformers by Edoardo Debenedetti (ETH Zurich, Switzerland), Vikash Sehwag (Princeton University, USA), and Prateek Mittal (Princeton University, USA)
- SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning by Harsh Chaudhari (Northeastern University), Matthew Jagielski (Google Research), and Alina Oprea (Northeastern University)
- Explainable Global Fairness Verification of Tree-Based Classifiers by Stefano Calzavara (Università Ca' Foscari Venezia, Italy), Lorenzo Cazzaro (Università Ca' Foscari Venezia, Italy), Claudio Lucchese (Università Ca' Foscari Venezia, Italy), and Federico Marcuzzi (Università Ca' Foscari Venezia, Italy)
- Endogenous Macrodynamics in Algorithmic Recourse by Patrick Altmeyer (Delft University of Technology, The Netherlands), Giovan Angela (Delft University of Technology, The Netherlands), Aleksander Buszydlik (Delft University of Technology, The Netherlands), Karol Dobiczek (Delft University of Technology, The Netherlands), Arie van Deursen (Delft University of Technology, The Netherlands), and Cynthia C. S. Liem (Delft University of Technology, The Netherlands)
- No Matter How You Slice It: Machine Unlearning with SISA Comes at the Expense of Minority Classes by Korbinian Koch (Universität Hamburg, Germany) and Marcus Soll (NORDAKADEMIE gAG Hochschule der Wirtschaft, Germany)
- Optimal Data Acquisition with Privacy-Aware Agents by Rachel Cummings (Columbia University), Hadi Elzayn (Stanford University), Emmanouil Pountourakis (Drexel University), Vasilis Gkatzelis (Drexel University), and Juba Ziani (Georgia Institute of Technology)
- Dissecting Distribution Inference by Anshuman Suri (University of Virginia), Yifu Lu (University of Michigan), Yanjin Chen (University of Virginia), and David Evans (University of Virginia)
- Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming by Huzaifa Arif (Rensselaer Polytechnic Institute, USA), Alex Gittens (Rensselaer Polytechnic Institute, USA), and Pin-Yu Chen (IBM Research, USA)
- Data Redaction from Pre-trained GANs by Zhifeng Kong (University of California San Diego, USA) and Kamalika Chaudhuri (University of California San Diego, USA)
- Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning by Gorka Abad (Radboud University, The Netherlands; Ikerlan research centre, Spain), Servio Paguada (Radboud University, The Netherlands; Ikerlan research centre, Spain), Oguzhan Ersoy (Radboud University, The Netherlands), Stjepan Picek (Radboud University, The Netherlands), Víctor Julio Ramírez-Durán (Ikerlan research centre, Spain), and Aitor Urbieta (Ikerlan research centre, Spain)
- Backdoor Attacks on Time Series: A Generative Approach by Yujing Jiang (University of Melbourne), Xingjun Ma (Fudan University), Sarah Monazam Erfani (University of Melbourne), and James Bailey (University of Melbourne)
- Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning by Reza Nasirigerdeh (Technical University of Munich, Germany), Javad Torkzadehmahani (Azad University of Kerman, Iran), Daniel Rueckert (Technical University of Munich, Germany; Imperial College London, United Kingdom), and Georgios Kaissis (Technical University of Munich, Germany; Helmholtz Zentrum Munich, Germany; Imperial College London, United Kingdom)
- Publishing Efficient On-device Models Increases Adversarial Vulnerability by Sanghyun Hong (Oregon State University), Nicholas Carlini (Google Brain), and Alexey Kurakin (Google Brain)
- Wealth Dynamics Over Generations: Analysis and Interventions by Krishna Acharya (Georgia Institute of Technology, USA), Eshwar Ram Arunachaleswaran (University of Pennsylvania, USA), Sampath Kannan (University of Pennsylvania, USA), Aaron Roth (University of Pennsylvania, USA), and Juba Ziani (Georgia Institute of Technology, USA)
- EDoG: Adversarial Edge Detection For Graph Neural Networks by Xiaojun Xu (University of Illinois at Urbana-Champaign), Hanzhang Wang (eBay), Alok Lal (eBay), Carl Gunter (University of Illinois at Urbana-Champaign), and Bo Li (University of Illinois at Urbana-Champaign)
- Neural Lower Bounds for Verification by Florian Jaeckle (University of Oxford, UK) and M. Pawan Kumar (University of Oxford, UK)
- Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks by Washington Garcia (University of Florida), Pin-Yu Chen (IBM Research), Hamilton Clouse (Air Force Research Laboratory), Somesh Jha (University of Wisconsin), and Kevin Butler (University of Florida)
- Learning Fair Representations through Uniformly Distributed Sensitive Attributes by Patrik Joslin Kenfack (Innopolis University, Russia), Adín Ramírez Rivera (University of Oslo, Norway), Adil Mehmood Khan (Innopolis University, Russia; University of Hull, UK), and Manuel Mazzara (Innopolis University, Russia)
- CARE: Certifiably Robust Learning with Reasoning via Variational Inference by Jiawei Zhang (University of Illinois Urbana-Champaign, USA), Linyi Li (University of Illinois Urbana-Champaign, USA), Ce Zhang (ETH Zürich, Switzerland), and Bo Li (University of Illinois Urbana-Champaign, USA)
- Toward Certified Robustness Against Real-World Distribution Shifts by Haoze Wu (Stanford University, USA), Teruhiro Tagomori (Stanford University, USA; NRI Secure, Japan), Alexander Robey (University of Pennsylvania, USA), Fengjun Yang (University of Pennsylvania, USA), Nikolai Matni (University of Pennsylvania, USA), George Pappas (University of Pennsylvania, USA), Hamed Hassani (University of Pennsylvania, USA), Corina Pasareanu (Carnegie Mellon University, USA), and Clark Barrett (Stanford University, USA)
- What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel by Yao Qin (Google Research, USA), Xuezhi Wang (Google Research, USA), Balaji Lakshminarayanan (Google Research, USA), Ed H. Chi (Google Research, USA), and Alex Beutel (Google Research, USA)
- VENOMAVE: Targeted Poisoning Against Speech Recognition by Hojjat Aghakhani (University of California, Santa Barbara), Lea Schönherr (CISPA Helmholtz Center for Information Security), Thorsten Eisenhofer (Ruhr University Bochum), Dorothea Kolossa (Technische Universität Berlin), Thorsten Holz (CISPA Helmholtz Center for Information Security), Christopher Kruegel (University of California, Santa Barbara), and Giovanni Vigna (University of California, Santa Barbara)
- FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs by Mintong Kang (University of Illinois at Urbana-Champaign), Linyi Li (University of Illinois at Urbana-Champaign), and Bo Li (University of Illinois at Urbana-Champaign)
- Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability by Sayanton V. Dibbo (Dartmouth College), Dae Lim Chung (Dartmouth College), and Shagufta Mehnaz (The Pennsylvania State University)
- ModelPred: A Framework for Predicting Trained Model from Training Data by Yingyan Zeng (Virginia Tech, USA), Jiachen T. Wang (Princeton University, USA), Si Chen (Virginia Tech, USA), Hoang Anh Just (Virginia Tech, USA), Ran Jin (Virginia Tech, USA), and Ruoxi Jia (Virginia Tech, USA)
- Distribution Inference Risks: Identifying and Mitigating Sources of Leakage by Valentin Hartmann (EPFL), Léo Meynent (EPFL), Maxime Peyrard (EPFL), Dimitrios Dimitriadis (Amazon), Shruti Tople (Microsoft Research), and Robert West (EPFL)
- Counterfactual Sentence Generation with Plug-and-Play Perturbation by Nishtha Madaan (IBM Research India; Indian Institute of Technology), Diptikalyan Saha (IBM Research India), and Srikanta Bedathur (Indian Institute of Technology)
- Rethinking the Entropy of Instance in Adversarial Training by Minseon Kim (KAIST, South Korea), Jihoon Tack (KAIST, South Korea), Jinwoo Shin (KAIST, South Korea), and Sung Ju Hwang (KAIST, South Korea; AITRICS, South Korea)
Position Papers
- Position: “Real Attackers Don’t Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice by Giovanni Apruzzese (University of Liechtenstein), Hyrum S. Anderson (Robust Intelligence), Savino Dambra (Norton Research Group), David Freeman (Meta), Fabio Pierazzi (King's College London), and Kevin A. Roundy (Norton Research Group)
- Position: Tensions Between the Proxies of Human Values in AI by Teresa Datta (Arthur), Daniel Nissani (Arthur), Max Cembalest (Arthur), Akash Khanna (Arthur), Haley Massa (Arthur), and John Dickerson (Arthur)