Yves-Alexandre de Montjoye's Avatar

Yves-Alexandre de Montjoye

@yvesalexandre.bsky.social

Professor of Applied Mathematics and CS at Imperial College London (πŸ‡¬πŸ‡§). MIT PhD. I'm working on automated privacy attacks, LLM memorization, and AI Safety. Road cyclist 🚴 and former EU Special Adviser (πŸ‡ͺπŸ‡Ί).

192 Followers  |  20 Following  |  45 Posts  |  Joined: 19.11.2024
Posts Following

Posts by Yves-Alexandre de Montjoye (@yvesalexandre.bsky.social)

New work from the team on identifying memorized training samples for free

26.06.2025 16:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Alignment Under Pressure: The Case for Informed Adversaries When Evaluating LLM Defenses Large language models (LLMs) are rapidly deployed in real-world applications ranging from chatbots to agentic systems. Alignment is one of the main approaches used to defend against attacks such as pr...

➑️ Read the full paper here: arxiv.org/abs/2505.15738

This is work with my amazing students πŸ§‘β€πŸŽ“ at Imperial College London: Xiaoxue Yang, Bozhidar Stevanoski and Matthieu Meeus

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

To properly defend LLM agents against prompt injection, we need 1️⃣ better defenses which are robust against informed adversaries, and 2️⃣ account for these vulnerabilities even in β€œaligned” LLMs when deploying them as agents.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ’¬ Does this mean the existing alignment-based defenses πŸ›‘οΈ are not useful? No! But they are likely more brittle than previously believed.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

More specifically, it uses intermediate training checkpoints as β€œstepping stones” πŸ‘£πŸͺ¨ to craft attacks against the final aligned model. This is hugely successful with the suffixes found by Checkpoint-GCG, bypassing SOTA defenses such as SecAlign 90%+ of the time 🎯.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We propose Checkpoint-GCG, an attack method that assumes an informed adversary with some knowledge of the alignment mechanism 🧭.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ€” How would we know this though? We propose to use informed adversaries – attackers with more knowledge than currently seems β€œrealistic”, to evaluate the robustness of defenses against future, yet-unknown attacks like we do in privacy.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

With LLMs being integrated into systems everywhere and deployed as agents, we however argue that this is not enough ⚠️. We cannot constantly pen-and-patch, patching LLMs every time a new attack is discovered. We need to ensure our defenses are robust and future-proof 🦾.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Recent methods claim near-perfect protection against existing red teaming attacks, including GCG, which automatically finds adversarial suffixes to manipulate model behaviour.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ›‘οΈ Today’s defenses against prompt injection typically rely on alignment-based training, teaching LLMs to ignore injected instructions πŸ’‰.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Sophisticated prompt injection attacks are often done by pairing instructions with adversarial suffixes πŸ’£ that trick models into following the injected instructions.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This is known as prompt injection πŸ’‰, where malicious actors hide instructions in files or web pages (like invisible white text) that manipulate the LLM’s behaviour.

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Have you ever uploaded a PDF πŸ“„ to ChatGPT πŸ€– and asked for a summary? There is a chance the model followed hidden instructions inside the file instead of your prompt 😈

A thread 🧡

20.06.2025 10:50 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Preview
Openings - Computational Privacy Group, Imperial College London Openings in the Computational Privacy Group at Imperial College London

πŸ“ Imperial College London
πŸ“…Start: October 2025
⏳Application deadline: June 6th
πŸ“©Application steps: cpg.doc.ic.ac.uk/openings/

20.05.2025 10:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This is an exciting opportunity for technically strong and curious candidates who want to do meaningful research that influences both academia and industry. If you’re weighing the next step in your career, we offer a path to impactful, high-quality research with freedom to explore

20.05.2025 10:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

To see more of our work and get to know the team, check here (cpg.doc.ic.ac.uk)!

20.05.2025 10:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

βœ…Can individuals be re-identified even from aggregated statistics? (arxiv.org/abs/2504.18497)

βœ…How can we efficiently identify training samples at risk of leaking in ML models? (arxiv.org/abs/2411.05743)

20.05.2025 10:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

βœ…How can we rigorously measure what LLMs memorize? (arxiv.org/abs/2406.17975)

βœ…How can we automatically discover privacy vulnerabilities in query-based systems at scale and in practice? (arxiv.org/abs/2409.01992)

20.05.2025 10:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Happy to share that we are offering one additional fully-funded PhD position starting in Fall 2025! Our research group at Imperial College London works on machine learning and data privacy and security.

Recently, we tackled questions such as:

20.05.2025 10:33 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🚨One (more!) fully-funded PhD position in our group at Imperial College London – Privacy & Machine Learning πŸ”πŸ€– starting Oct 2025

Plz RT πŸ”„

20.05.2025 10:33 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Post image

Huge congrats to @spalab.cs.ucr.edu's Georgi Ganev for receiving the Distinguished Paper Award at IEEE S&P for his work "The Inadequacy of Similarity-based Privacy Metrics: Privacy Attacks against β€œTruly Anonymous” Synthetic Datasets."

Paper: arxiv.org/pdf/2312.051...

14.05.2025 17:51 β€” πŸ‘ 18    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0
Preview
Bid to host SaTML 2026 Thank you for considering to host SaTML! SaTML has been organized as a 3 day conference so far. We are looking for volunteers interested in finding a venue to host the conference in 2026. By submitti...

🌍 Help shape the future of SaTML!

We are on the hunt for a 2026 host city - and you could lead the way. Submit a bid to become General Chair of the conference:

forms.gle/vozsaXjCoPzc...

12.05.2025 12:15 β€” πŸ‘ 6    πŸ” 8    πŸ’¬ 0    πŸ“Œ 1
Preview
The DCR Delusion: Measuring the Privacy Risk of Synthetic Data Synthetic data has become an increasingly popular way to share data without revealing sensitive information. Though Membership Inference Attacks (MIAs) are widely considered the gold standard for empi...

Work with my amazing students and collaborators Zexi Yao, natasakrco.bsky.social, and Georgi Ganev.

πŸ”— Full paper: arxiv.org/abs/2505.01524

09.05.2025 12:26 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

What should I do then? Use MIAs. They are the rigorous and comprehensive standard for evaluating the privacy of synthetic data, including making legal anonymity claims, and when comparing models.

09.05.2025 12:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

DCR indeed only appears to catch the most obvious privacy failures, like synthetic datasets that contain large numbers of exact copies from the training data.

09.05.2025 12:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ“ DCR fails to detect privacy leakage, but could it still work as an inexpensive, directional signal for privacy risk? In our experiments, DCR shows no correlation with how vulnerable a dataset is to membership inference attacks.

09.05.2025 12:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

😨 The same holds for classical synthetic data generators (IndHist, Baynet, CTGAN): even when DCR marks their output as β€œprivate,” membership inference attacks can still correctly correctly infer the membership of up to 20% of the training records used to generate the synthetic data.

09.05.2025 12:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ˜Άβ€πŸŒ«οΈ Datasets generated by state-of-the-art tabular diffusion models (TabDDPM, ClavaDDPM) declared β€œprivate” by DCR are highly vulnerable to membership inference attacks (MIAs) – reaching up to 0.35 true positive rate (TPR) at a low false positive rate (FPR).

09.05.2025 12:21 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

How do you know your synthetic data is anonymous πŸ₯Έ?

If your answer is β€œwe checked Distance to Closest Record (DCR),” then… we might have bad news for you.

Our latest work shows DCR and other proxy metrics to be inadequate measures of the privacy risk of synthetic data.

09.05.2025 12:21 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

We hope DeSIA will now help do the same for what is arguably the most common data release in practice: aggregate statistics.

07.05.2025 11:15 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0