Accepted Papers

This list is tentative, as some papers are still undergoing major revisions. If accepted, they will be added here. For more details about the three types of papers accepted at SaTML 2023, see our call for papers.

Research Papers

  1. Reducing Certified Regression to Certified Classification for General Poisoning Attacks
    Zayd Hammoudeh, Daniel Lowd
    OpenReview
  2. Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
    Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
    OpenReview
  3. ExPLoit: Extracting Private Labels in Split Learning
    Sanjay Kariyappa, Moinuddin K Qureshi
    OpenReview
  4. Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning?
    Guy Heller, Ethan Fetaya
    OpenReview
  5. A Light Recipe to Train Robust Vision Transformers
    Edoardo Debenedetti, Vikash Sehwag, Prateek Mittal
    OpenReview
  6. SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
    Harsh Chaudhari, Matthew Jagielski, Alina Oprea
    OpenReview
  7. Explainable Global Fairness Verification of Tree-Based Classifiers
    Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi
    OpenReview
  8. Endogenous Macrodynamics in Algorithmic Recourse
    Patrick Altmeyer, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, Cynthia Liem
    OpenReview
  9. Optimal Data Acquisition with Privacy-Aware Agents
    Rachel Cummings, Hadi Elzayn, Vasilis Gkatzelis, Emmanouil Pountorakis, Juba Ziani
    OpenReview
  10. Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming
    Huzaifa Arif, Alex Gittens, Pin-Yu Chen
    OpenReview
  11. Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning
    Gorka Abad, Servio Paguada, Oguzhan Ersoy, Stjepan Picek, Victor Julio Ramírez-Durán, Aitor Urbieta
    OpenReview
  12. Backdoor Attacks on Time Series: A Generative Approach
    Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
    OpenReview
  13. Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning
    Reza Nasirigerdeh, Javad Torkzadehmahani, Daniel Rueckert, Georgios Kaissis
    OpenReview
  14. Publishing Efficient On-device Models Increases Adversarial Vulnerability
    Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
    OpenReview
  15. EDoG: Adversarial Edge Detection For Graph Neural Networks
    Xiaojun Xu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li
    OpenReview
  16. Neural Lower Bounds for Verification
    Florian Jaeckle, M. Pawan Kumar
    OpenReview
  17. Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks
    Washington Garcia, Pin-Yu Chen, Hamilton Scott Clouse, Somesh Jha, Kevin R. B. Butler
    OpenReview
  18. CARE: Certifiably Robust Learning with Reasoning via Variational Inference
    Jiawei Zhang, Linyi Li, Ce Zhang, Bo Li
    OpenReview
  19. Toward Certified Robustness Against Real-World Distribution Shifts
    Haoze Wu, Teruhiro Tagomori, Alexander Robey, Fengjun Yang, Nikolai Matni, Hamed Hassani, George J. Pappas, Corina Pasareanu, Clark Barrett
    OpenReview
  20. What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel
    Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed H. Chi, Alex Beutel
    OpenReview
  21. VENOMAVE: Targeted Poisoning Against Speech Recognition
    Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna
    OpenReview
  22. FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs
    Mintong Kang, Linyi Li, Bo Li
    OpenReview
  23. Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability
    Sayanton V. Dibbo, Dae Lim Chung, Shagufta Mehnaz
    OpenReview
  24. Data2Model: Predicting Models from Training Data
    Yingyan Zeng, Tianhao Wang, Si Chen, Hoang Anh Just, Ran Jin, Ruoxi Jia
    OpenReview
  25. Distribution inference risks: Identifying and mitigating sources of leakage
    Valentin Hartmann, Léo Meynent, Maxime Peyrard, Dimitrios Dimitriadis, Shruti Tople, Robert West
    OpenReview
  26. Counterfactual Sentence Generation with Plug-and-Play Perturbation
    Nishtha Madaan, Diptikalyan Saha, Srikanta J. Bedathur
    OpenReview
  27. Rethinking the Entropy of Instance in Adversarial Training
    Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang
    OpenReview

Systematization of Knowledge (SoK) Papers

  1. SoK: Harnessing Prior Knowledge for Explainable Machine Learning: An Overview
    Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
    OpenReview
  2. SoK: A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms
    Amanda Lee Coston, Anna Kawakami, Haiyi Zhu, Ken Holstein, Hoda Heidari
    OpenReview
  3. SoK: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
    Stephen Casper, Tilman Rauker, Anson Ho, Dylan Hadfield-Menell
    OpenReview

Position Papers

  1. Position: “Real Attackers Don’t Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice
    Giovanni Apruzzese, Hyrum S Anderson, Savino Dambra, David Freeman, Fabio Pierazzi, Kevin Alejandro Roundy
    OpenReview
  2. Position: Tensions Between the Proxies of Human Values in AI
    Teresa Datta, Daniel Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P Dickerson
    OpenReview