Back to top

Competitions

Competition Track

SaTML traditionally includes a Competition Track as part of its program. Participants are invited to engage in selected data science competitions, competing to achieve the highest score on relevant machine learning or security tasks. These tasks are based on well-defined problems and corresponding datasets provided by the competition organizers. The competition results will be presented and discussed during dedicated sessions at the conference (see Call for Competitions for details).

Accepted Competitions

For this year, the following competitions have been accepted for the conference. Interested researchers can participate in any of these by following the instructions provided on the competition websites. For more information or specific inquiries, please contact the respective competition organizers directly.

🏁 Adaptive Prompt Injection: LLMail Inject

Website: https://microsoft.github.io/llmail-inject/

This competition invites participants to navigate and evade multiple prompt injection defences within an LLM-integrated email client. As an attacker, your goal is to craft emails with instructions for the LLM to execute your chosen task while avoiding detection. The competition is structured into various scenarios, each reflecting different levels of attacker knowledge and requiring successful email delivery, retrieval, and processing.

Organizers: Sahar Abdelnabi, Giovanni Cherubin, Aideen Fay, Andrew Paverd, Mark Russinovich, Ahmed Salem, Egor Zverev, and Javier Rando.

🏁 Inference Attacks Against Document VQA

Website: https://benchmarks.elsa-ai.eu/?ch=2&com=introduction

This competition invites the development of inference attacks to extract sensitive information from Document Visual Question Answering models. These models are especially vulnerable to leakage of private data, and so are especially relevant in the contexts of privacy risks and privacy-preserving ML. We provide a competition framework to encourage the implementation of practical methods that expose such privacy vulnerabilities, with the aim of better understanding real-world threat models and the robustness of differential privacy in multimodal models.

Organizers: Dimosthenis Karatzas, Andrey Barsky, Mohamed Ali Souibgui, Khanh Nguyen, Raouf Kerkouche, Marlon Tobaben, Kangsoo Jung, Joonas Jälkö, Vincent Poulain, Aurélie Joseph, Ernest Valveny, Josep Lladós, Catuscia Palamidessi, Antti Honkela, and Mario Fritz.

🏁 Membership Inference on Diffusion-model-based Synthetic Tabular Data

Website: https://vectorinstitute.github.io/MIDST

The MIDST challenge invites participants to assess the privacy risks of synthetic tabular data generated by diffusion models using membership inference attacks. Participants will be able to develop and apply strategies in both blackbox and whitebox settings to determine if specific data points were included in training during the synthesis of single or multi-relational tables.

Organizers: Masoumeh Shafieinejad, Xi He, John Jewell, Mahshid Alinoori, Sana Ayromlou, Wei Pang, Gauri Sharma, Veronica Chatrath, and Deval Pandya.

🏁 Robust Android Malware Detection Competition

Website: https://ramd-competition.github.io

The Robust Android Malware Detection Competition aims to evaluate machine learning-based detectors with respect to (i) temporal data drift due to the evolution of both malware and legitimate applications and (ii) adversarial manipulations of malware samples to evade detection. The competition consists of three separate tracks (i.e., Adversarial Robustness to Feature-space Attacks, Adversarial Robustness to Problem-space Attacks, and Temporal Robustness to Data Drift).

Organizers: Angelo Sotgiu, Maura Pintor, Ambra Demontis, and Battista Biggio.