Thank you to @schmidtsciences.bsky.social for funding our lab's work on cryptographic approaches for verifiable guarantees in ML systems and for connecting us to other groups working on these questions!
23.07.2025 16:20
Selective Prediction Via Training Dynamics
Paper ➡️ arxiv.org/abs/2205.13532
Workshop ➡️ 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
Poster ➡️ West Meeting Room 118-120 on Sat 19 Jul 10:15 a.m. – 11:15 a.m. & 4:45 p.m. – 5:30 p.m.
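Reading only from the title (this is my gloss, not the paper's verified method), selective prediction via training dynamics plausibly means: use signals gathered across training, such as agreement between intermediate checkpoints and the final model, to decide when to predict and when to abstain. A hypothetical NumPy sketch; the function name and the `min_agreement` threshold are illustrative assumptions.

```python
import numpy as np

def selective_predict(checkpoint_probs, min_agreement=0.9):
    """Hypothetical sketch: abstain unless intermediate checkpoints
    agree often enough with the final model.

    checkpoint_probs: (n_checkpoints, n_classes) softmax outputs for one
    test point, ordered from early to late in training.
    Returns (predicted class or None, agreement score).
    """
    preds = checkpoint_probs.argmax(axis=1)
    final_pred = preds[-1]
    agreement = float((preds == final_pred).mean())
    return (int(final_pred) if agreement >= min_agreement else None), agreement

# Toy usage: a point the checkpoints agree on vs. one they flip on.
stable = np.array([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])
flippy = np.array([[0.7, 0.3], [0.4, 0.6], [0.45, 0.55]])
print(selective_predict(stable))  # (1, 1.0) -> predict
print(selective_predict(flippy))  # (None, 0.67) -> abstain
```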
11.07.2025 20:03
Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (✨ oral paper ✨)
Paper ➡️ arxiv.org/abs/2505.22356
Poster ➡️ E-504 on Thu 17 Jul 4:30 p.m. – 7 p.m.
Oral Presentation ➡️ West Ballroom C on Thu 17 Jul 4:15 p.m. – 4:30 p.m.
11.07.2025 20:03
Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
TL;DR ➡️ We show that a model owner can artificially introduce uncertainty, and we provide a detection mechanism.
Paper ➡️ arxiv.org/abs/2505.23968
Poster ➡️ E-1002 on Wed 16 Jul 11 a.m. – 1:30 p.m.
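To make the TL;DR concrete, here is a toy sketch of the kind of abuse it describes, under my own assumptions (the paper's Mirage attack and the cryptographic Confidential Guardian check are more involved): a dishonest model owner flattens the softmax with a high temperature on a targeted group of inputs, so an innocent-looking confidence threshold quietly abstains there. All names, the threshold, and the temperature below are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()                      # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_or_abstain(logits, targeted, threshold=0.8, dishonest_temp=10.0):
    """Toy dishonest abstention: flatten the softmax on a targeted group
    so the confidence check fails and the model refuses to predict."""
    temperature = dishonest_temp if targeted else 1.0
    probs = softmax(logits, temperature)
    confidence = float(probs.max())
    if confidence < threshold:
        return "abstain", confidence
    return int(probs.argmax()), confidence

logits = np.array([3.0, 0.5, -1.0])
print(predict_or_abstain(logits, targeted=False))  # confident prediction (class 0)
print(predict_or_abstain(logits, targeted=True))   # artificially uncertain -> abstain
```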
11.07.2025 20:03
Congratulations, Yoshua!! This is more than deserved.
12.06.2025 15:58
Abstract. Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands or more samples of noise in each training step.
We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest that can be efficiently sampled by a series of table lookups. Our method achieves significant runtime improvements and requires much less communication compared to previous work, especially at higher numbers of parties. It is also highly flexible: while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230× runtime improvement and 3× less communication compared to the prior state of the art for sampling from a discrete Gaussian distribution in MPC.
Secure Noise Sampling for Differentially Private Collaborative Learning (Olive Franzese, Congyu Fang, Radhika Garg, Somesh Jha, Nicolas Papernot, Xiao Wang, Adam Dziedzic) ia.cr/2025/1025
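A minimal cleartext sketch of the table-lookup idea from the abstract (my own construction; the actual protocol performs the uniform sampling and lookups inside MPC): tabulate a quantized CDF of a truncated discrete Gaussian, then sample by drawing a uniform fixed-point integer and binary-searching the table. The truncation width `tail` and fixed-point width `bits` are illustrative assumptions and control how statistically close the approximation is.

```python
import numpy as np

def build_table(sigma, tail=8, bits=16):
    """Tabulate a quantized CDF of a discrete Gaussian truncated to
    [-tail*ceil(sigma), tail*ceil(sigma)] in bits-bit fixed point."""
    half = tail * int(np.ceil(sigma))
    support = np.arange(-half, half + 1)
    pmf = np.exp(-support.astype(float) ** 2 / (2.0 * sigma ** 2))
    pmf /= pmf.sum()
    cdf = np.round(np.cumsum(pmf) * 2 ** bits).astype(np.int64)
    return support, cdf

def sample(support, cdf, rng, bits=16):
    """Inverse-CDF sampling as a table lookup: draw a uniform fixed-point
    integer (jointly sampled randomness in the MPC setting) and find the
    first CDF entry exceeding it."""
    u = int(rng.integers(0, 2 ** bits))
    idx = int(np.searchsorted(cdf, u, side="right"))
    return int(support[min(idx, len(support) - 1)])

rng = np.random.default_rng(0)
support, cdf = build_table(sigma=4.0)
print([sample(support, cdf, rng) for _ in range(8)])
```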
02.06.2025 20:28
Excited to share the first batch of research projects funded through the Canadian AI Safety Institute's research program at CIFAR!
The projects will tackle topics ranging from misinformation to safety in AI applications to scientific discovery.
Learn more: cifar.ca/cifarnews/20...
05.06.2025 14:21
📢 New ICML 2025 paper!
Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
🤔 Think model uncertainty can be trusted?
We show that it can be misused, and how to stop it!
Meet Mirage (our attack 🔥) & Confidential Guardian (our defense 🛡️).
🧵 1/10
02.06.2025 14:38
If you are submitting to @ieeessp.bsky.social this year, a friendly reminder that there is an abstract submission deadline this Thursday, May 29 (AoE).
More details: sp2026.ieee-security.org/cfpapers.html
27.05.2025 12:49
As part of the theme Societal Aspects of Securing the Digital Society, I will be hiring PhD students and postdocs at #MPI-SP, focusing in particular on the computational and sociotechnical aspects of technology regulations and the governance of emerging tech. Get in touch if interested.
23.05.2025 14:12
Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model
Machine learning models can be trained with formal privacy guarantees via differentially private optimizers such as DP-SGD. In this work, we focus on a threat model where the adversary has access...
Excited to be in Singapore for ICLR, presenting our work on privacy auditing (w/ Aurélien & @nicolaspapernot.bsky.social). If you are interested in differential privacy/privacy auditing/security for ML, drop by (#497, 26 Apr, 10–12:30 p.m.) or let's grab a coffee! ☕
openreview.net/forum?id=xzK...
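For context on what is being audited: a minimal NumPy sketch of the standard DP-SGD step (clip each per-example gradient, average, add calibrated Gaussian noise). In the hidden-state threat model, as I understand the linked paper, the adversary observes only the final model rather than every intermediate update, which makes auditing harder. Function names and toy shapes below are mine.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD update: clip every per-example gradient to clip_norm,
    average, then add Gaussian noise scaled to the clipping bound."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(clipped)
    return params - lr * (avg + rng.normal(0.0, sigma, size=avg.shape))

# Toy usage: 8 examples, 4 parameters; a hidden-state adversary would
# only ever see the final params after many such steps.
rng = np.random.default_rng(0)
params = np.zeros(4)
grads = rng.normal(size=(8, 4))
params = dp_sgd_step(params, grads, rng=rng)
print(params)
```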
21.04.2025 14:57
Congrats on what looks like an amazing event, Konrad!
10.04.2025 16:11
Very exciting! Congratulations to the organizing team on what looks like an amazing event!
09.04.2025 14:16
Welcome to #SaTML25! Kicking things off with opening remarks: excited for a packed schedule of keynotes, talks, and competitions on secure and trustworthy machine learning.
09.04.2025 07:14
Karina Vold says the rapid development of AI systems has left both philosophers & computer scientists grappling with difficult questions. #UofT uoft.me/bsp
01.04.2025 14:00
Congratulations again, Stephan, on this brilliant next step! Looking forward to what you will accomplish with @randomwalker.bsky.social & @msalganik.bsky.social!
13.03.2025 07:50
The Canadian AI Safety Institute (CAISI) Research Program at CIFAR is now accepting Expressions of Interest for Solution Networks in AI Safety under two themes:
* Mitigating the Safety Risks of Synthetic Content
* AI Safety in the Global South
cifar.ca/ai/ai-and-so...
12.03.2025 19:20
I think the talk is being streamed but only internally within the MPI. I'm not sure if you still have access from your time there?
08.03.2025 16:06
I will be giving a talk at the MPI-IS @maxplanckcampus.bsky.social in Tübingen next week (March 12 @ 11am). The talk will cover my group's overall approach to trust in ML, with a focus on our work on unlearning and how to obtain verifiable guarantees of trust.
Details: is.mpg.de/events/speci...
05.03.2025 15:40
Great news! Congrats Xiao!
18.02.2025 21:56
For Canadian colleagues, CIFAR and the CPI at UWaterloo are sponsoring a special issue "Artificial Intelligence Safety and Public Policy in Canada" in Canadian Public Policy / Analyse de politiques
More details: www.cpp-adp.ca
31.01.2025 19:59
CIFAR AI Catalyst Grants - CIFAR
Encouraging new collaborations and original research projects in the field of machine learning, as well as its application to different sectors of science and society.
One of the first components of the CAISI (Canadian AI Safety Institute) research program has just launched: a call for Catalyst Grant Projects on AI Safety.
Funding: up to 100K for one year
Deadline to apply: February 27, 2025 (11:59 p.m. AoE)
More details: cifar.ca/ai/cifar-ai-...
31.01.2025 19:42
The list of accepted papers for @satml.org 2025 is now online:
satml.org/accepted-pap...
If you're intrigued by secure and trustworthy machine learning, join us April 9-11 in Copenhagen, Denmark 🇩🇰. Find more details here:
satml.org/attend/
21.01.2025 14:25
It is organized this year by the amazing Konrad Rieck and @someshjha.bsky.social; you should attend!
When: April 9-11
Where: Copenhagen
14.01.2025 16:05
If you work at the intersection of security, privacy, and machine learning, or more broadly on how to trust ML, SaTML is a small-scale conference with highly relevant work where you'll be able to have high-quality conversations with colleagues working in your area.
14.01.2025 16:05
Congrats @drolnick.bsky.social !
20.12.2024 17:50
Hello world! The SaTML conference is now flying the blue skies!
SaTML is the IEEE Conference on Secure and Trustworthy Machine Learning. The 2025 iteration, chaired by @someshjha.bsky.social @mlsec.org, will be in beautiful Copenhagen!
Follow for the latest updates on the conference!
satml.org
20.12.2024 01:13
Thanks Tegan!
15.12.2024 02:33
Thank you @shakirm.bsky.social
13.12.2024 12:35
Assistant Professor, MIT EECS & LIDS | Co-founder & Chair, Climate Change AI | MIT Technology Review 35 Innovators Under 35 | she/they
Prime Minister of Canada and Leader of the Liberal Party
markcarney.ca
Researching Trustworthy Machine Learning advised by Prof. Nicolas Papernot
PhD student at University of Toronto and Vector Institute
Postdoctoral Researcher @ TU Berlin • ML & Computer Security • eisenhofer.me
Lead AI Security & Privacy Research @Qualcomm
Assistant Prof at University of Waterloo, CIFAR AI Chair at Vector Institute. Formerly UWNLP, Stanford NLP, MSR, FAIR, Google Brain, Salesforce Research via MetaMind
#machinelearning, #nlp
victorzhong.com
Faculty at MPI-SP. Computer scientist researching data protection & governance, digital well-being, and responsible computing (IR/ML/AI).
https://asiabiega.github.io/
PhD at Max Planck Institute for Security and Privacy | HCI, consent, responsible data collection, tech policy
Machine Learning Professor
https://cims.nyu.edu/~andrewgw
CS PhD Student at NYU, previously @MetaAI. Trying to make ML more reliable, predictable, and representative.
Professor of Sociology, Princeton, www.princeton.edu/~mjs3
Author of Bit by Bit: Social Research in the Digital Age, bitbybitbook.com
Tenure-Track Faculty at CISPA • Cryptography & Provable Security
researcher studying privacy, security, reliability, and broader social implications of algorithmic systems.
website: https://kulyny.ch
Associate Professor of Computer Science at University of Toronto. Research at the intersection of AI, data, and society.
Assistant Professor of Computer Science at the University of Toronto.
I lead the PPS Lab (http://pps-lab.com).
Previously at UC Berkeley and ETH Zurich.