Video Recordings for SaTML 2023
Opening Remarks
Nicolas Papernot
Keynotes
Robustness in Machine Learning: A Five-Year Retrospective
Zico Kolter
Eugenics and the Promise of Utopia through Artificial General Intelligence
Timnit Gebru
Tutorials
An Introduction to Differential Privacy
Gautam Kamath
Aligning ML Systems with Human Intent
Jacob Steinhardt
Session A
Explainable Global Fairness Verification of Tree-Based Classifiers
Stefano Calzavara (Università Ca' Foscari Venezia, Italy), Lorenzo Cazzaro (Università Ca' Foscari Venezia, Italy), Claudio Lucchese (Università Ca' Foscari Venezia, Italy), and Federico Marcuzzi (Università Ca' Foscari Venezia, Italy)
Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
Julien Ferry (LAAS-CNRS, Université de Toulouse, CNRS, France), Ulrich Aïvodji (Ecole de Technologie Supérieure, Canada), Sébastien Gambs (Université du Québec à Montréal, Canada), Marie-José Huguet (LAAS-CNRS, Université de Toulouse, CNRS, INSA, France), and Mohamed Siala (LAAS-CNRS, Université de Toulouse, CNRS, INSA, France)
Wealth Dynamics Over Generations: Analysis and Interventions
Krishna Acharya (Georgia Institute of Technology, USA), Eshwar Ram Arunachaleswaran (University of Pennsylvania, USA), Sampath Kannan (University of Pennsylvania, USA), Aaron Roth (University of Pennsylvania, USA), and Juba Ziani (Georgia Institute of Technology, USA)
Learning Fair Representations Through Uniformly Distributed Sensitive Attributes
Patrik Joslin Kenfack (Innopolis University, Russia), Adín Ramírez Rivera (University of Oslo, Norway), Adil Mehmood Khan (Innopolis University, Russia; University of Hull, UK), and Manuel Mazzara (Innopolis University, Russia)
Session B
Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning?
Guy Heller (Bar-Ilan University, Ramat Gan, Israel) and Ethan Fetaya (Bar-Ilan University, Ramat Gan, Israel)
Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning
Reza Nasirigerdeh (Technical University of Munich, Germany), Javad Torkzadehmahani (Azad University of Kerman, Iran), Daniel Rueckert (Technical University of Munich, Germany; Imperial College London, United Kingdom), and Georgios Kaissis (Technical University of Munich, Germany; Helmholtz Zentrum Munich, Germany; Imperial College London, United Kingdom)
Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability
Sayanton V. Dibbo (Dartmouth College), Dae Lim Chung (Dartmouth College), and Shagufta Mehnaz (The Pennsylvania State University)
Distribution inference risks: Identifying and mitigating sources of leakage
Valentin Hartmann (EPFL), Léo Meynent (EPFL), Maxime Peyrard (EPFL), Dimitrios Dimitriadis (Amazon), Shruti Tople (Microsoft Research), and Robert West (EPFL)
Dissecting Distribution Inference
Anshuman Suri (University of Virginia), Yifu Lu (University of Michigan), Yanjin Chen (University of Virginia), and David Evans (University of Virginia)
Session C
ExPLoit: Extracting Private Labels in Split Learning
Sanjay Kariyappa (Georgia Institute of Technology) and Moinuddin K Qureshi (Georgia Institute of Technology)
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari (Northeastern University), Matthew Jagielski (Google Research), and Alina Oprea (Northeastern University)
Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming
Huzaifa Arif (Rensselaer Polytechnic Institute, USA), Alex Gittens (Rensselaer Polytechnic Institute, USA), and Pin-Yu Chen (IBM Research, USA)
Optimal Data Acquisition with Privacy-Aware Agents
Rachel Cummings (Columbia University), Hadi Elzayn (Stanford University), Emmanouil Pountourakis (Drexel University), Vasilis Gkatzelis (Drexel University), and Juba Ziani (Georgia Institute of Technology)
Session D
A Light Recipe to Train Robust Vision Transformers
Edoardo Debenedetti (ETH Zurich, Switzerland), Vikash Sehwag (Princeton University, USA), and Prateek Mittal (Princeton University, USA)
Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks
Washington Garcia (University of Florida), Pin-Yu Chen (IBM Research), Hamilton Clouse (Air Force Research Laboratory), Somesh Jha (University of Wisconsin), and Kevin Butler (University of Florida)
Publishing Efficient On-device Models Increases Adversarial Vulnerability
Sanghyun Hong (Oregon State University), Nicholas Carlini (Google Brain), and Alexey Kurakin (Google Brain)
EDoG: Adversarial Edge Detection For Graph Neural Networks (virtual)
Xiaojun Xu (University of Illinois at Urbana-Champaign), Hanzhang Wang (eBay), Alok Lal (eBay), Carl Gunter (University of Illinois at Urbana-Champaign), and Bo Li (University of Illinois at Urbana-Champaign)
Counterfactual Sentence Generation with Plug-and-Play Perturbation
Nishtha Madaan (IBM Research India; Indian Institute of Technology), Diptikalyan Saha (IBM Research India), and Srikanta Bedathur (Indian Institute of Technology)
Rethinking the Entropy of Instance in Adversarial Training
Minseon Kim (KAIST, South Korea), Jihoon Tack (KAIST, South Korea), Jinwoo Shin (KAIST, South Korea), and Sung Ju Hwang (KAIST, South Korea; AITRICS, South Korea)
Towards Transferable Unrestricted Adversarial Examples with Minimum Changes
Fangcheng Liu (Peking University), Chao Zhang (Peking University), and Hongyang Zhang (University of Waterloo)
Position: “Real Attackers Don’t Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese (University of Liechtenstein), Hyrum S. Anderson (Robust Intelligence), Savino Dambra (Norton Research Group), David Freeman (Meta), Fabio Pierazzi (King's College London), and Kevin A. Roundy (Norton Research Group)
What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel
Yao Qin (Google Research, USA), Xuezhi Wang (Google Research, USA), Balaji Lakshminarayanan (Google Research, USA), Ed H. Chi (Google Research, USA), and Alex Beutel (Google Research, USA)
Session E
Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning
Gorka Abad (Radboud University, The Netherlands; Ikerlan research centre, Spain), Servio Paguada (Radboud University, The Netherlands; Ikerlan research centre, Spain), Oguzhan Ersoy (Radboud University, The Netherlands), Stjepan Picek (Radboud University, The Netherlands), Víctor Julio Ramírez-Durán (Ikerlan research centre, Spain), and Aitor Urbieta (Ikerlan research centre, Spain)
Backdoor Attacks on Time Series: A Generative Approach
Yujing Jiang (University of Melbourne), Xingjun Ma (Fudan University), Sarah Monazam Erfani (University of Melbourne), and James Bailey (University of Melbourne)
VENOMAVE: Targeted Poisoning Against Speech Recognition
Hojjat Aghakhani (University of California, Santa Barbara), Lea Schönherr (CISPA Helmholtz Center for Information Security), Thorsten Eisenhofer (Ruhr University Bochum), Dorothea Kolossa (Technische Universität Berlin), Thorsten Holz (CISPA Helmholtz Center for Information Security), Christopher Kruegel (University of California, Santa Barbara), and Giovanni Vigna (University of California, Santa Barbara)
Session F
Endogenous Macrodynamics in Algorithmic Recourse
Patrick Altmeyer (Delft University of Technology, The Netherlands), Giovan Angela (Delft University of Technology, The Netherlands), Aleksander Buszydlik (Delft University of Technology, The Netherlands), Karol Dobiczek (Delft University of Technology, The Netherlands), Arie van Deursen (Delft University of Technology, The Netherlands), and Cynthia C. S. Liem (Delft University of Technology, The Netherlands)
ModelPred: A Framework for Predicting Trained Model from Training Data
Yingyan Zeng (Virginia Tech, USA), Jiachen T. Wang (Princeton University, USA), Si Chen (Virginia Tech, USA), Hoang Anh Just (Virginia Tech, USA), Ran Jin (Virginia Tech, USA), and Ruoxi Jia (Virginia Tech, USA)
SoK: Harnessing Prior Knowledge for Explainable Machine Learning: An Overview
Katharina Beckh (Fraunhofer IAIS, Germany), Sebastian Müller (University of Bonn, Germany), Matthias Jakobs (TU Dortmund University, Germany), Vanessa Toborek (University of Bonn, Germany), Hanxiao Tan (TU Dortmund University, Germany), Raphael Fischer (TU Dortmund University, Germany), Pascal Welke (University of Bonn, Germany), Sebastian Houben (Hochschule Bonn-Rhein-Sieg, Germany), and Laura von Rueden (Fraunhofer IAIS, Germany)
SoK: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
Tilman Rauker (n/a), Anson Ho (Epoch), Stephen Casper (MIT CSAIL), and Dylan Hadfield-Menell (MIT CSAIL)
Session G
Reducing Certified Regression to Certified Classification for General Poisoning Attacks
Zayd Hammoudeh (University of Oregon, USA) and Daniel Lowd (University of Oregon, USA)
Neural Lower Bounds For Verification
Florian Jaeckle (University of Oxford, UK) and M. Pawan Kumar (University of Oxford, UK)
Toward Certified Robustness Against Real-World Distribution Shifts
Haoze Wu (Stanford University, USA), Teruhiro Tagomori (Stanford University, USA; NRI Secure, Japan), Alexander Robey (University of Pennsylvania, USA), Fengjun Yang (University of Pennsylvania, USA), Nikolai Matni (University of Pennsylvania, USA), George Pappas (University of Pennsylvania, USA), Hamed Hassani (University of Pennsylvania, USA), Corina Pasareanu (Carnegie Mellon University, USA), and Clark Barrett (Stanford University, USA)
CARE: Certifiably Robust Learning with Reasoning via Variational Inference
Jiawei Zhang (University of Illinois Urbana-Champaign, USA), Linyi Li (University of Illinois Urbana-Champaign, USA), Ce Zhang (ETH Zürich, Switzerland), and Bo Li (University of Illinois Urbana-Champaign, USA)
FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs
Mintong Kang (University of Illinois at Urbana-Champaign), Linyi Li (University of Illinois at Urbana-Champaign), and Bo Li (University of Illinois at Urbana-Champaign)
Session H
PolyKervNets: Activation-free Neural Networks For Private Inference
Toluwani Aremu (Mohamed Bin Zayed Institute of Artificial Intelligence, UAE) and Karthik Nandakumar (Mohamed Bin Zayed Institute of Artificial Intelligence, UAE)
Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses
Ari Karchmer (Boston University)
No Matter How You Slice It: Machine Unlearning with SISA Comes at the Expense of Minority Classes
Korbinian Koch (Universität Hamburg, Germany) and Marcus Soll (NORDAKADEMIE gAG Hochschule der Wirtschaft, Germany)
Data Redaction from Pre-trained GANs
Zhifeng Kong (University of California San Diego, USA) and Kamalika Chaudhuri (University of California San Diego, USA)
Session I
Position: Tensions Between the Proxies of Human Values in AI
Teresa Datta (Arthur), Daniel Nissani (Arthur), Max Cembalest (Arthur), Akash Khanna (Arthur), Haley Massa (Arthur), and John Dickerson (Arthur)
SoK: A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms
Amanda Coston (Carnegie Mellon University, USA), Anna Kawakami (Carnegie Mellon University, USA), Haiyi Zhu (Carnegie Mellon University, USA), Ken Holstein (Carnegie Mellon University, USA), and Hoda Heidari (Carnegie Mellon University, USA)
Competitions
Improving training data extraction attacks on large language models
organized by Nicholas Carlini, Christopher Choquette-Choo, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Milad Nasr, Florian Tramèr, and Chiyuan Zhang
Closing Remarks
Nicolas Papernot