Online anonymity has always relied on "practical obscurity" — the idea that deanonymization is possible but too costly to do at scale. We show this assumption no longer holds. LLMs make it cheap, fast, and automated.
Not sure what to do about this...
20.02.2026 18:26 —
👍 0
🔁 0
💬 0
📌 0
Recent LLM forecasters are getting better at predicting the future. But there's a challenge: How can we evaluate and compare AI forecasters without waiting years to see which predictions were right? (1/11)
11.01.2025 01:53 —
👍 5
🔁 2
💬 1
📌 0
I am in beautiful Vancouver for #NeurIPS2024 with those amazing folks!
Say hi if you want to chat about ML privacy and security
(or speciality ☕)
10.12.2024 19:48 —
👍 0
🔁 1
💬 0
📌 0
🔥 I'm thrilled that I'll be spending next year in the group of @floriantramer.bsky.social at ETH Zurich, working on privacy and memorization in ML 🔥
(Not an announcement, just what I usually do. It's a really great group full of incredibly amazing people that I am thrilled to work with every day!)
06.12.2024 16:29 —
👍 3
🔁 0
💬 0
📌 0
If it's curated by @javirandor.com then you know it's amazing!
04.12.2024 12:46 —
👍 1
🔁 0
💬 0
📌 0
Zurich is a great place to live and do research. It became a slightly better one overnight! Excited to see OAI opening an office here with such a great starting team 🎉
04.12.2024 09:46 —
👍 9
🔁 2
💬 1
📌 1
Gradient Masking All-at-Once: Ensemble Everything Everywhere Is Not Robust
Ensemble Everything Everywhere is a defense against adversarial examples that people got quite excited about a few months ago (in particular, the defense produces "perceptually aligned" gradients, just like adversarial training)
Unfortunately, we show it's not robust...
arxiv.org/abs/2411.14834
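The core idea of the defense, ensembling a model's intermediate representations, can be illustrated with a minimal numpy sketch. All names, shapes, and weights below are made up for illustration; the real defense uses a trained image classifier and a more involved aggregation than the plain averaging shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: two hidden layers, each with its
# own classifier head. Weights are random here; in the actual defense
# the backbone is a trained image classifier.
D, H, C = 8, 16, 3  # input dim, hidden dim, number of classes
W1 = rng.normal(size=(D, H))
W2 = rng.normal(size=(H, H))
heads = [rng.normal(size=(H, C)) for _ in range(2)]  # one head per layer

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(x):
    h1 = np.tanh(x @ W1)   # intermediate representation, layer 1
    h2 = np.tanh(h1 @ W2)  # intermediate representation, layer 2
    # Ensemble step: combine the class distributions predicted from
    # each intermediate layer (here by simple averaging).
    probs = [softmax(h @ Wh) for h, Wh in zip((h1, h2), heads)]
    return np.mean(probs, axis=0)

p = predict(rng.normal(size=(1, D)))  # averaged class distribution
```

The attack in the paper shows that this kind of multi-layer ensembling can mask gradients rather than confer genuine robustness, which is why adaptive evaluation matters.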
25.11.2024 08:38 —
👍 28
🔁 9
💬 1
📌 0