Accepted Papers 2023

There were 40 papers accepted out of 152 submissions, resulting in an acceptance rate of 26.3%. For more details about the three types of papers accepted at SaTML 2023, see our call for papers.

Two best paper awards were announced at the conference:

  • SoK: A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms by Amanda Coston (Carnegie Mellon University, USA), Anna Kawakami (Carnegie Mellon University, USA), Haiyi Zhu (Carnegie Mellon University, USA), Ken Holstein (Carnegie Mellon University, USA), and Hoda Heidari (Carnegie Mellon University, USA).
  • Optimal Data Acquisition with Privacy-Aware Agents by Rachel Cummings (Columbia University), Hadi Elzayn (Stanford University), Emmanouil Pountourakis (Drexel University), Vasilis Gkatzelis (Drexel University), and Juba Ziani (Georgia Institute of Technology).

Conference attendees can access the proceedings here (the username and password were communicated at the conference).

Systematization of Knowledge (SoK) Papers

  1. SoK: Harnessing Prior Knowledge for Explainable Machine Learning: An Overview
    Katharina Beckh (Fraunhofer IAIS, Germany), Sebastian Müller (University of Bonn, Germany), Matthias Jakobs (TU Dortmund University, Germany), Vanessa Toborek (University of Bonn, Germany), Hanxiao Tan (TU Dortmund University, Germany), Raphael Fischer (TU Dortmund University, Germany), Pascal Welke (University of Bonn, Germany), Sebastian Houben (Hochschule Bonn-Rhein-Sieg, Germany), and Laura von Rueden (Fraunhofer IAIS, Germany)
    OpenReview
  2. SoK: A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms
    Amanda Coston (Carnegie Mellon University, USA), Anna Kawakami (Carnegie Mellon University, USA), Haiyi Zhu (Carnegie Mellon University, USA), Ken Holstein (Carnegie Mellon University, USA), and Hoda Heidari (Carnegie Mellon University, USA)
    OpenReview
  3. SoK: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
    Tilman Rauker (n/a), Anson Ho (Epoch), Stephen Casper (MIT CSAIL), and Dylan Hadfield-Menell (MIT CSAIL)
    OpenReview

Research Papers

  1. Reducing Certified Regression to Certified Classification for General Poisoning Attacks
    Zayd Hammoudeh (University of Oregon, USA) and Daniel Lowd (University of Oregon, USA)
    OpenReview
  2. Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses
    Ari Karchmer (Boston University)
    OpenReview
  3. Towards Transferable Unrestricted Adversarial Examples with Minimum Changes
    Fangcheng Liu (Peking University), Chao Zhang (Peking University), and Hongyang Zhang (University of Waterloo)
    OpenReview
  4. PolyKervNets: Activation-free Neural Networks For Efficient Private Inference
    Toluwani Aremu (Mohamed Bin Zayed Institute of Artificial Intelligence, UAE) and Karthik Nandakumar (Mohamed Bin Zayed Institute of Artificial Intelligence, UAE)
    OpenReview
  5. Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
    Julien Ferry (LAAS-CNRS, Université de Toulouse, CNRS, France), Ulrich Aïvodji (Ecole de Technologie Supérieure, Canada), Sébastien Gambs (Université du Québec à Montréal, Canada), Marie-José Huguet (LAAS-CNRS, Université de Toulouse, CNRS, INSA, France), and Mohamed Siala (LAAS-CNRS, Université de Toulouse, CNRS, INSA, France)
    OpenReview
  6. ExPLoit: Extracting Private Labels in Split Learning
    Sanjay Kariyappa (Georgia Institute of Technology) and Moinuddin K Qureshi (Georgia Institute of Technology)
    OpenReview
  7. Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning?
    Guy Heller (University of Bar-Ilan, Ramat Gan, Israel) and Ethan Fetaya (University of Bar-Ilan, Ramat Gan, Israel)
    OpenReview
  8. A Light Recipe to Train Robust Vision Transformers
    Edoardo Debenedetti (ETH Zurich, Switzerland), Vikash Sehwag (Princeton University, USA), and Prateek Mittal (Princeton University, USA)
    OpenReview
  9. SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
    Harsh Chaudhari (Northeastern University), Matthew Jagielski (Google Research), and Alina Oprea (Northeastern University)
    OpenReview
  10. Explainable Global Fairness Verification of Tree-Based Classifiers
    Stefano Calzavara (Università Ca' Foscari Venezia, Italy), Lorenzo Cazzaro (Università Ca' Foscari Venezia, Italy), Claudio Lucchese (Università Ca' Foscari Venezia, Italy), and Federico Marcuzzi (Università Ca' Foscari Venezia, Italy)
    OpenReview
  11. Endogenous Macrodynamics in Algorithmic Recourse
    Patrick Altmeyer (Delft University of Technology, The Netherlands), Giovan Angela (Delft University of Technology, The Netherlands), Aleksander Buszydlik (Delft University of Technology, The Netherlands), Karol Dobiczek (Delft University of Technology, The Netherlands), Arie van Deursen (Delft University of Technology, The Netherlands), and Cynthia C. S. Liem (Delft University of Technology, The Netherlands)
    OpenReview
  12. No Matter How You Slice It: Machine Unlearning with SISA Comes at the Expense of Minority Classes
    Korbinian Koch (Universität Hamburg, Germany) and Marcus Soll (NORDAKADEMIE gAG Hochschule der Wirtschaft, Germany)
    OpenReview
  13. Optimal Data Acquisition with Privacy-Aware Agents
    Rachel Cummings (Columbia University), Hadi Elzayn (Stanford University), Emmanouil Pountourakis (Drexel University), Vasilis Gkatzelis (Drexel University), and Juba Ziani (Georgia Institute of Technology)
    OpenReview
  14. Dissecting Distribution Inference
    Anshuman Suri (University of Virginia), Yifu Lu (University of Michigan), Yanjin Chen (University of Virginia), and David Evans (University of Virginia)
    OpenReview
  15. Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming
    Huzaifa Arif (Rensselaer Polytechnic Institute, USA), Alex Gittens (Rensselaer Polytechnic Institute, USA), and Pin-Yu Chen (IBM Research, USA)
    OpenReview
  16. Data Redaction from Pre-trained GANs
    Zhifeng Kong (University of California San Diego, USA) and Kamalika Chaudhuri (University of California San Diego, USA)
    OpenReview
  17. Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning
    Gorka Abad (Radboud University, The Netherlands; Ikerlan research centre, Spain), Servio Paguada (Radboud University, The Netherlands; Ikerlan research centre, Spain), Oguzhan Ersoy (Radboud University, The Netherlands), Stjepan Picek (Radboud University, The Netherlands), Víctor Julio Ramírez-Durán (Ikerlan research centre, Spain), and Aitor Urbieta (Ikerlan research centre, Spain)
    OpenReview
  18. Backdoor Attacks on Time Series: A Generative Approach
    Yujing Jiang (University of Melbourne), Xingjun Ma (Fudan University), Sarah Monazam Erfani (University of Melbourne), and James Bailey (University of Melbourne)
    OpenReview
  19. Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning
    Reza Nasirigerdeh (Technical University of Munich, Germany), Javad Torkzadehmahani (Azad University of Kerman, Iran), Daniel Rueckert (Technical University of Munich, Germany; Imperial College London, United Kingdom), and Georgios Kaissis (Technical University of Munich, Germany; Helmholtz Zentrum Munich, Germany; Imperial College London, United Kingdom)
    OpenReview
  20. Publishing Efficient On-device Models Increases Adversarial Vulnerability
    Sanghyun Hong (Oregon State University), Nicholas Carlini (Google Brain), and Alexey Kurakin (Google Brain)
    OpenReview
  21. Wealth Dynamics Over Generations: Analysis and Interventions
    Krishna Acharya (Georgia Institute of Technology, USA), Eshwar Ram Arunachaleswaran (University of Pennsylvania, USA), Sampath Kannan (University of Pennsylvania, USA), Aaron Roth (University of Pennsylvania, USA), and Juba Ziani (Georgia Institute of Technology, USA)
    OpenReview
  22. EDoG: Adversarial Edge Detection For Graph Neural Networks
    Xiaojun Xu (University of Illinois at Urbana-Champaign), Hanzhang Wang (eBay), Alok Lal (eBay), Carl Gunter (University of Illinois at Urbana-Champaign), and Bo Li (University of Illinois at Urbana-Champaign)
    OpenReview
  23. Neural Lower Bounds for Verification
    Florian Jaeckle (University of Oxford, UK) and M. Pawan Kumar (University of Oxford, UK)
    OpenReview
  24. Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks
    Washington Garcia (University of Florida), Pin-Yu Chen (IBM Research), Hamilton Clouse (Air Force Research Laboratory), Somesh Jha (University of Wisconsin), and Kevin Butler (University of Florida)
    OpenReview
  25. Learning Fair Representations through Uniformly Distributed Sensitive Attributes
    Patrik Joslin Kenfack (Innopolis University, Russia), Adín Ramírez Rivera (University of Oslo, Norway), Adil Mehmood Khan (Innopolis University, Russia; University of Hull, UK), and Manuel Mazzara (Innopolis University, Russia)
    OpenReview
  26. CARE: Certifiably Robust Learning with Reasoning via Variational Inference
    Jiawei Zhang (University of Illinois Urbana-Champaign, USA), Linyi Li (University of Illinois Urbana-Champaign, USA), Ce Zhang (ETH Zürich, Switzerland), and Bo Li (University of Illinois Urbana-Champaign, USA)
    OpenReview
  27. Toward Certified Robustness Against Real-World Distribution Shifts
    Haoze Wu (Stanford University, USA), Teruhiro Tagomori (Stanford University, USA; NRI Secure, Japan), Alexander Robey (University of Pennsylvania, USA), Fengjun Yang (University of Pennsylvania, USA), Nikolai Matni (University of Pennsylvania, USA), George Pappas (University of Pennsylvania, USA), Hamed Hassani (University of Pennsylvania, USA), Corina Pasareanu (Carnegie Mellon University, USA), and Clark Barrett (Stanford University, USA)
    OpenReview
  28. What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel
    Yao Qin (Google Research, USA), Xuezhi Wang (Google Research, USA), Balaji Lakshminarayanan (Google Research, USA), Ed H. Chi (Google Research, USA), and Alex Beutel (Google Research, USA)
    OpenReview
  29. VENOMAVE: Targeted Poisoning Against Speech Recognition
    Hojjat Aghakhani (University of California, Santa Barbara), Lea Schönherr (CISPA Helmholtz Center for Information Security), Thorsten Eisenhofer (Ruhr University Bochum), Dorothea Kolossa (Technische Universität Berlin), Thorsten Holz (CISPA Helmholtz Center for Information Security), Christopher Kruegel (University of California, Santa Barbara), and Giovanni Vigna (University of California, Santa Barbara)
    OpenReview
  30. FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs
    Mintong Kang (University of Illinois at Urbana-Champaign), Linyi Li (University of Illinois at Urbana-Champaign), and Bo Li (University of Illinois at Urbana-Champaign)
    OpenReview
  31. Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability
    Sayanton V. Dibbo (Dartmouth College), Dae Lim Chung (Dartmouth College), and Shagufta Mehnaz (The Pennsylvania State University)
    OpenReview
  32. ModelPred: A Framework for Predicting Trained Model from Training Data
    Yingyan Zeng (Virginia Tech, USA), Jiachen T. Wang (Princeton University, USA), Si Chen (Virginia Tech, USA), Hoang Anh Just (Virginia Tech, USA), Ran Jin (Virginia Tech, USA), and Ruoxi Jia (Virginia Tech, USA)
    OpenReview
  33. Distribution inference risks: Identifying and mitigating sources of leakage
    Valentin Hartmann (EPFL), Léo Meynent (EPFL), Maxime Peyrard (EPFL), Dimitrios Dimitriadis (Amazon), Shruti Tople (Microsoft Research), and Robert West (EPFL)
    OpenReview
  34. Counterfactual Sentence Generation with Plug-and-Play Perturbation
    Nishtha Madaan (IBM Research India; Indian Institute of Technology), Diptikalyan Saha (IBM Research India), and Srikanta Bedathur (Indian Institute of Technology)
    OpenReview
  35. Rethinking the Entropy of Instance in Adversarial Training
    Minseon Kim (KAIST, South Korea), Jihoon Tack (KAIST, South Korea), Jinwoo Shin (KAIST, South Korea), and Sung Ju Hwang (KAIST, South Korea; AITRICS, South Korea)
    OpenReview

Position Papers

  1. Position: “Real Attackers Don’t Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice
    Giovanni Apruzzese (University of Liechtenstein), Hyrum S. Anderson (Robust Intelligence), Savino Dambra (Norton Research Group), David Freeman (Meta), Fabio Pierazzi (King's College London), and Kevin A. Roundy (Norton Research Group)
    OpenReview
  2. Position: Tensions Between the Proxies of Human Values in AI
    Teresa Datta (Arthur), Daniel Nissani (Arthur), Max Cembalest (Arthur), Akash Khanna (Arthur), Haley Massa (Arthur), and John Dickerson (Arthur)
    OpenReview