Keynote Talks

Keynote 1:
Malice, Models and Middlemen

Michael Veale, University College London
Abstract

In recent years, machine learning systems have produced impressive outputs, yet their robustness and overall performance remain open problems. Criminal enterprises, however, are less concerned with certain types of errors, as they can externalize the cost of those errors onto their victims. This dynamic is evident in areas like spam and fraud, where automated systems have long been prevalent. As criminal enterprises begin deploying machine learning systems for malicious purposes, it becomes increasingly difficult to identify and stop these actors directly. Disrupting their activities instead requires targeting the intermediaries that connect criminals to their victims: communication providers, cloud compute services, operating systems that provide local compute (such as those on smartphones), model marketplaces like Hugging Face and GitHub, and content platforms like social media services. The safety and trustworthiness of AI will likely depend on the cooperation and governance mechanisms established by these intermediaries.

Legal regimes and technical governance methods will play a significant role in this landscape: monitoring cloud computing, controlling what can run on operating systems, removing controversial dual-use models from online repositories, and requiring platforms to make content decisions at scale. Each of these measures presents challenges, and striking the right balance is difficult. In this keynote, I will explore the role of intermediaries, examine emerging governance practices, and highlight the core tensions, difficulties, and opportunities in this evolving space.

Speaker Bio

Michael Veale is Associate Professor in Digital Rights and Regulation and Vice-Dean (Education Innovation) at the Faculty of Laws, University College London (UCL), and Fellow at the Institute for Information Law, University of Amsterdam. His research focusses on how to understand and address challenges of power and justice that digital technologies and their users create and exacerbate, in areas such as privacy-enhancing technologies and machine learning. Veale has advised a wide variety of actors across the world on these issues, and his work is cited by hundreds of public policy players including courts, regulators, governments, legislatures, civil society and business, as well as thousands of academics. He sits on the advisory councils for the Open Rights Group and Foxglove, the Panel of Experts of the Digital Freedom Fund, and the Technology Advisory Panel for the Information Commissioner’s Office. He holds a PhD in the governance of machine learning from the Faculty of Engineering, UCL, as well as degrees from LSE and Maastricht University.

Keynote 2:
The Science of Empirical Privacy Measurement: Memorization and Beyond

Kamalika Chaudhuri, University of California, San Diego
Abstract

Since Fredrikson et al. (2014), a body of work has emerged on the empirical measurement of privacy leakage, both from machine learning models and in other settings. In this talk, I will describe some recent advances on this topic. First, we will look at memorization in vision encoder models and propose a principled measurement of “déjà vu memorization”; we will show how to scale it to off-the-shelf vision encoder models. Next, we will go beyond memorization and discuss how privacy and memorization may be decoupled in more complicated settings.

Speaker Bio

Kamalika Chaudhuri is a Director and Research Scientist at FAIR, Meta, and an adjunct professor at the University of California, San Diego, where she was formerly a full professor. She received a Bachelor of Technology degree in Computer Science and Engineering from the Indian Institute of Technology, Kanpur, in 2002, and a PhD in Computer Science from the University of California, Berkeley, in 2007. She received an NSF CAREER Award in 2013 and a Hellman Faculty Fellowship in 2012. She has served as program co-chair for AISTATS 2019 and ICML 2019, and as General Chair for ICML 2022.

Keynote 3:
Artificial Intelligence: Should you trust it?

Matt Turek, Defense Advanced Research Projects Agency
Abstract

We have seen significant progress in Artificial Intelligence (AI) over the last ten years, predominantly driven by dramatic advances in machine learning, and particularly deep learning. Society is realizing the benefits across a wide range of application domains. Within the military, however, the consequences of making a wrong decision based on AI could be catastrophic, and the United States Department of Defense must defend against nation-state-level adversaries with significant resources, the ability to create deception, and the desire to change our way of life. The US Defense Advanced Research Projects Agency (DARPA) is funding research in trustworthy AI: technologies and systems that can be trusted to perform as expected despite the efforts of sophisticated adversaries. In this presentation, I will discuss research efforts toward AI that we can trust with our (and warfighters’) lives and explore DARPA-funded advances that appear promising toward reaching the goal of trustworthy AI.

Speaker Bio

Matt Turek is the deputy office director for the Defense Advanced Research Projects Agency’s (DARPA) Information Innovation Office (I2O), where he provides technical leadership and works with program managers to envision, create, and transition capabilities that ensure enduring information advantage for the United States and its allies. Previously, Turek served as I2O’s acting deputy director and as a program manager for AI-related programs, including Explainable AI, Machine Common Sense, Media Forensics, and Semantic Forensics. He joined DARPA from Kitware, Inc., where he led a team developing computer vision technologies. Prior to that, he was at GE Global Research conducting research in medical imaging and industrial inspection.