Keynote Talks

Monika Henzinger

Keynote 1:
The Continual Challenge: Differential Privacy for Evolving Datasets

Monika Henzinger, Institute of Science and Technology Austria
Abstract

In an era of pervasive and continuous data collection, protecting individual privacy in dynamic environments has become increasingly critical. This talk focuses on differential privacy in the continual observation setting, where data evolves over time and new outputs are released after each update. Unlike static settings, continual observation introduces unique challenges, including cumulative privacy loss and the risk of adversarial inference from correlated outputs. We will introduce the core principles of differential privacy and examine how they extend to streaming scenarios, covering new composition theorems for the continual setting and highlighting the key techniques and trade-offs involved in maintaining privacy over time.
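
As a small, self-contained illustration of the kind of mechanism studied in the continual observation setting (a sketch for orientation only, not material from the talk), the classic binary-tree (tree-aggregation) counter releases a running count of a 0/1 stream after every update under pure ε-differential privacy: each stream element touches at most roughly log T tree nodes, so Laplace noise of scale (log T)/ε per node suffices. The function names below are illustrative.

    import math
    import random

    def dyadic_intervals(t):
        """Decompose [1, t] into disjoint dyadic intervals, Fenwick-tree style."""
        intervals, end = [], t
        while end > 0:
            length = end & (-end)              # largest power of two dividing `end`
            intervals.append((end - length + 1, length))
            end -= length
        return intervals

    def private_running_counts(stream, epsilon):
        """Release a running count after every update of a 0/1 stream under
        epsilon-DP via the binary-tree mechanism: each element affects at most
        `levels` cached node sums, so Laplace noise of scale levels/epsilon
        per node protects the entire output sequence."""
        T = max(len(stream), 1)
        levels = math.ceil(math.log2(T)) + 1
        scale = levels / epsilon
        noisy_node = {}                        # (start, length) -> noisy interval sum
        prefix = [0]                           # true prefix sums of the stream
        released = []
        for x in stream:
            prefix.append(prefix[-1] + x)
            t = len(prefix) - 1
            estimate = 0.0
            for start, length in dyadic_intervals(t):
                if (start, length) not in noisy_node:
                    true_sum = prefix[start + length - 1] - prefix[start - 1]
                    # Laplace(scale) noise as a difference of two exponentials
                    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
                    noisy_node[(start, length)] = true_sum + noise
                estimate += noisy_node[(start, length)]
            released.append(estimate)
        return released

Because each released count aggregates at most about log2 T noisy nodes, the additive error grows only polylogarithmically in the stream length, in contrast to spending the same total privacy budget on fresh, independent noise at every step.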

Speaker Bio

Monika Henzinger is a professor of Computer Science and the Vice President of Technology Transfer at the Institute of Science and Technology Austria (ISTA). She holds a PhD in Computer Science from Princeton University, was an assistant professor at Cornell University, a member of technical staff at the DEC Systems Research Center, the director of research at Google, and a professor of computer science at EPFL and at the University of Vienna. Monika is an ACM and EATCS Fellow and a member of the Austrian Academy of Sciences and the German National Academy of Sciences (Leopoldina). She has received an honorary doctorate from the Technical University of Dortmund, two Advanced Grants of the European Research Council, the Carus Medal of the Leopoldina, and the Wittgenstein Award of the Austrian Science Fund.

Moritz Hardt

Keynote 2:
Embracing the Tyranny of Testing

Moritz Hardt, Max Planck Institute for Intelligent Systems
Abstract

We all remember cramming for a test, scrambling to prepare in the final stretch by specifically targeting what we knew would be covered. When a benchmark catches on, it incentivizes model builders to do the engineering equivalent of cramming for the test, preparing models to excel on the specific benchmark. Although not a form of cheating, this potent practice of training on the test task confounds model comparisons and threatens benchmark validity.

But what if the problem also charted a path forward? If adapting to the test distribution is so effective, why not push the logic to the extreme: adapt the model to each test instance? Doing so is, in fact, the essence of test-time training, an evolving conceptual toolkit to improve models at test time by turning each test instance into its own learning problem.

In this talk, I’ll speculate about a connection between the problem of training on the test task and test-time training. The natural limit point for both is instance-optimal adaptation, pointing to a confluence between model testing and model training: as test-time compute budgets grow, the line between training and testing blurs. I’ll conclude by imagining a future of machine learning that fully embraces the tyranny of testing.

Speaker Bio

Moritz Hardt is a director at the Max Planck Institute for Intelligent Systems and an honorary professor at the University of Tübingen. Previously, he was an Associate Professor with tenure in Electrical Engineering and Computer Sciences at the University of California, Berkeley. Hardt’s research contributes to the scientific foundations of machine learning in the social world. His upcoming book “The Emerging Science of Machine Learning Benchmarks” will appear with Princeton University Press in 2026. He also co-authored the textbooks “Patterns, Predictions, and Actions: Foundations of Machine Learning” (Princeton) and “Fairness and Machine Learning: Limitations and Opportunities” (MIT Press).

Ashia Wilson

Keynote 3:
Governing Open-Weight Generative Models

Ashia Wilson, Massachusetts Institute of Technology
Abstract

The rapid diffusion of open-weight generative models has transformed creative practice but has also introduced new security risks, including large-scale misuse and the proliferation of illegal content such as non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM). As generative systems become increasingly modular and decentralized, harmful capabilities often arise not from base models themselves but from lightweight fine-tuning and recombination strategies that are easy to distribute, difficult to trace, and hard to audit. This creates a fundamental challenge for trustworthy AI: platforms and regulators are expected to detect and mitigate high-risk models, yet legal, ethical, and adversarial constraints make direct content generation or inspection infeasible.

In this talk, I argue that securing open-weight generative ecosystems requires a shift from downstream content moderation to upstream, generation-free risk assessment at the level of model parameters. I highlight recent work showing that malicious or abusive fine-tuning objectives leave detectable signatures in weight space, enabling scalable screening and monitoring without prompting models, generating outputs, or accessing training data. More broadly, I outline a research agenda for weight-space accountability as a security primitive for open generative AI, with implications for platform governance, regulatory compliance, and the design of preventive safeguards as AI development continues to decentralize.

Speaker Bio

Ashia Wilson is a Lister Brothers Career Development Assistant Professor at MIT whose research builds the theory and practice of reliable AI. Her group studies four core themes: privacy and unlearning mechanisms for modern models, optimization and sampling methods for large-scale training, the dynamics of homogenization and algorithmic influence, and evaluation frameworks that enable rigorous measurement of model behavior. She draws on statistics, optimization, and dynamical systems to analyze and design AI systems that are both scientifically grounded and socially responsible. Ashia earned her Ph.D. in statistics from UC Berkeley and previously held a postdoctoral position at Microsoft Research. Her work has been recognized with best paper and spotlight awards at FAccT, NeurIPS, and OptML.

Lorenzo Cavallaro

Keynote 4:
Trustworthy AI… for Systems Security

Lorenzo Cavallaro, University College London
Abstract

No day goes by without reading about machine learning (ML) success stories in every walk of life. Systems security is no exception, where ML’s tantalizing performance may leave us wondering whether any problems remain unsolved. Yet ML has no clairvoyant abilities, and once the magic wears off, we are left in uncharted territory. Can it truly help us build secure systems? In this talk, I will argue that performance alone is not enough. I will highlight the consequences of adversarial attacks and distribution shifts in realistic settings, and discuss how semantics may provide a path forward. My goal is to foster a deeper understanding of machine learning’s role in systems security and its potential for future advancements.

Speaker Bio

Lorenzo Cavallaro grew up on pizza, spaghetti, and Phrack, and soon developed a passion for underground and academic research. He is a Full Professor of Computer Science at University College London (UCL), where he leads the Systems Security Research Lab (https://s2lab.cs.ucl.ac.uk). Lorenzo’s research vision is to enhance the effectiveness of machine learning for systems security in adversarial settings. To this end, he and his team investigate the interplay among program analysis abstractions, engineered and learned representations, and grounded models, and their crucial role in creating Trustworthy AI for Systems Security. Lorenzo publishes at, and sits on the Program Committees of, leading conferences in computer security and ML; his work has received the Distinguished Paper Award at USENIX Security 2022, an ICML 2024 Spotlight, and the Best Paper Award at DLSP 2025 (co-located with IEEE S&P). He is also an Associate Editor of ACM TOPS and IEEE TDSC. Lorenzo is co-founder and Chief Scientific Officer of BynarIO (https://bynar.io), a startup pioneering AI to autonomously identify and repair vulnerabilities in software, restoring trust and control over what you run. In addition to his love for food, Lorenzo finds his Flow in science, music, and family.