Congratulations on this recognition, Gaël!
11.10.2025 02:09
@nicolaspapernot.bsky.social
Security and Privacy of Machine Learning at UofT, Vector Institute, and Google 🇨🇦🇫🇷🇪🇺 Co-Director of Canadian AI Safety Institute (CAISI) Research Program at CIFAR. Opinions mine.
Congratulations to my fellow awardees Rose Yu (UCSD) and Lerrel Pinto (NYU)!
I enjoyed learning about the work of Yoshua, Rose, and Lerrel at the Samsung AI Forum earlier this week.
news.samsung.com/global/samsu...
Thank you to Samsung for the AI Researcher of 2025 award! I'm privileged to collaborate with many talented students & postdoctoral fellows @utoronto.ca @vectorinstitute.ai. This would not have been possible without them!
It was a great honour to receive the award from @yoshuabengio.bsky.social !
IEEE Conference on Secure and Trustworthy Machine Learning, Technical University of Munich, Germany, March 23–25, 2026
Three weeks to go until the SaTML 2026 deadline! ⏰ We look forward to your work on security, privacy, and fairness in AI.
🗓️ Deadline: Sept 24, 2025
We have also updated our Call for Papers with a statement on LLM usage; check it out:
🔗 satml.org/call-for-pap...
@satml.org
Congratulations, Maksym, this is a great place to start your research group! Looking forward to following your work.
06.08.2025 19:24
Thank you to @schmidtsciences.bsky.social for funding our lab's work on cryptographic approaches for verifiable guarantees in ML systems and for connecting us to other groups working on these questions!
23.07.2025 16:20
📄 Selective Prediction Via Training Dynamics
Paper ➡️ arxiv.org/abs/2205.13532
Workshop ➡️ 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
Poster ➡️ West Meeting Room 118-120 on Sat 19 Jul 10:15 a.m. – 11:15 a.m. & 4:45 p.m. – 5:30 p.m.
📄 Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (✨ oral paper ✨)
Paper ➡️ arxiv.org/abs/2505.22356
Poster ➡️ E-504 on Thu 17 Jul 4:30 p.m. – 7 p.m.
Oral Presentation ➡️ West Ballroom C on Thu 17 Jul 4:15 p.m. – 4:30 p.m.
📄 Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
TL;DR ➡️ We show that a model owner can artificially introduce uncertainty, and we provide a detection mechanism.
Paper ➡️ arxiv.org/abs/2505.23968
Poster ➡️ E-1002 on Wed 16 Jul 11 a.m. – 1:30 p.m.
Congratulations, Yoshua!! It's more than deserved.
12.06.2025 15:58
Abstract. Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands or more samples of noise in each training step. We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest which can be efficiently sampled by a series of table lookups. Our method achieves significant runtime improvements and requires much less communication compared to previous work, especially at higher numbers of parties. It is also highly flexible: while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230× runtime improvement and 3× less communication compared to the prior state-of-the-art for sampling from a discrete Gaussian distribution in MPC.
Secure Noise Sampling for Differentially Private Collaborative Learning (Olive Franzese, Congyu Fang, Radhika Garg, Somesh Jha, Nicolas Papernot, Xiao Wang, Adam Dziedzic) ia.cr/2025/1025
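For intuition about the table-lookup idea at the core of the protocol, here is a minimal plaintext Python sketch (my illustration, not the paper's MPC construction, which performs these steps inside secure computation): it tabulates an approximate CDF of a truncated discrete Gaussian, then draws samples by inverse-CDF lookup. The function names, sigma, and truncation bound are assumptions for illustration.

    # Illustrative sketch only: approximating a discrete Gaussian with a
    # finite CDF table, then sampling via inverse-CDF lookup. The paper's
    # protocol runs the lookup step inside MPC; this toy version does not.
    import math
    import random

    def build_discrete_gaussian_table(sigma, bound):
        # Truncate the support to [-bound, bound] and tabulate the CDF.
        support = list(range(-bound, bound + 1))
        weights = [math.exp(-(z * z) / (2 * sigma * sigma)) for z in support]
        total = sum(weights)
        cdf, acc = [], 0.0
        for w in weights:
            acc += w / total
            cdf.append(acc)
        return support, cdf

    def sample_by_lookup(support, cdf):
        # One uniform draw, then a table lookup (linear scan for clarity;
        # a real implementation would use binary search).
        u = random.random()
        for z, c in zip(support, cdf):
            if u <= c:
                return z
        return support[-1]

    support, cdf = build_discrete_gaussian_table(sigma=2.0, bound=12)
    print([sample_by_lookup(support, cdf) for _ in range(5)])

The truncation and table discretization are why the sampled distribution is only statistically close to the exact discrete Gaussian, matching the abstract's framing.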
02.06.2025 20:28
Excited to share the first batch of research projects funded through the Canadian AI Safety Institute's research program at CIFAR!
The projects will tackle topics ranging from misinformation to safety in AI applications to scientific discovery.
Learn more: cifar.ca/cifarnews/20...
📢 New ICML 2025 paper!
Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
🤔 Think model uncertainty can be trusted?
We show that it can be misused, and how to stop it!
Meet Mirage (our attack 🔥) & Confidential Guardian (our defense 🛡️).
🧵 1/10
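As a toy illustration of the abstention-abuse setting the thread describes (my sketch, not the actual Mirage construction from the paper), the snippet below shows how a dishonest model owner could flatten the predictive distribution on chosen inputs so that a standard confidence-threshold abstention rule rejects them; the temperature value and threshold are assumptions.

    # Toy illustration (not the paper's Mirage attack): faking uncertainty by
    # flattening the output distribution so an abstention rule rejects inputs.
    import numpy as np

    def softmax(logits, temperature=1.0):
        z = logits / temperature
        z = z - z.max()  # subtract max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    def predict_with_abstention(logits, dishonest=False, threshold=0.8):
        # A high temperature flattens probabilities, inflating "uncertainty".
        probs = softmax(logits, temperature=10.0 if dishonest else 1.0)
        conf = float(probs.max())
        if conf >= threshold:
            return int(probs.argmax()), conf
        return "abstain", conf

    logits = np.array([4.0, 1.0, 0.5])            # confidently class 0
    print(predict_with_abstention(logits))         # answers: (0, ~0.93)
    print(predict_with_abstention(logits, True))   # ("abstain", ~0.41)

Per the thread, Confidential Guardian's role is to detect this kind of artificially introduced uncertainty; the sketch only shows why detection is needed.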
If you are submitting to @ieeessp.bsky.social this year, a friendly reminder that there is an abstract submission deadline this Thursday, May 29 (AoE).
More details: sp2026.ieee-security.org/cfpapers.html
As part of the theme Societal Aspects of Securing the Digital Society, I will be hiring PhD students and postdocs at #MPI-SP, focusing in particular on the computational and sociotechnical aspects of technology regulations and the governance of emerging tech. Get in touch if interested.
23.05.2025 14:12
Help shape the future of SaTML!
We are on the hunt for a 2026 host city - and you could lead the way. Submit a bid to become General Chair of the conference:
forms.gle/vozsaXjCoPzc...
Excited to be in Singapore for ICLR, presenting our work on privacy auditing (w/ Aurélien & @nicolaspapernot.bsky.social). If you are interested in differential privacy/privacy auditing/security for ML, drop by (#497, 26 Apr, 10–12:30 pm) or let's grab a coffee! ☕
openreview.net/forum?id=xzK...
Congrats on what looks like an amazing event, Konrad!
10.04.2025 16:11
Very exciting! Congratulations to the organizing team on what looks like an amazing event!
09.04.2025 14:16
🎉 Welcome to #SaTML25! Kicking things off with opening remarks; excited for a packed schedule of keynotes, talks, and competitions on secure and trustworthy machine learning.
09.04.2025 07:14
Image shows Karina Vold.
Karina Vold says the rapid development of AI systems has left both philosophers & computer scientists grappling with difficult questions. #UofT 💻 uoft.me/bsp
01.04.2025 14:00
Congratulations again, Stephan, on this brilliant next step! Looking forward to what you will accomplish with @randomwalker.bsky.social & @msalganik.bsky.social!
13.03.2025 07:50
The Canadian AI Safety Institute (CAISI) Research Program at CIFAR is now accepting Expressions of Interest for Solution Networks in AI Safety under two themes:
* Mitigating the Safety Risks of Synthetic Content
* AI Safety in the Global South
cifar.ca/ai/ai-and-so...
I think the talk is being streamed but only internally within the MPI. I'm not sure if you still have access from your time there?
08.03.2025 16:06
I will be giving a talk at the MPI-IS @maxplanckcampus.bsky.social in Tübingen next week (March 12 @ 11am). The talk will cover my group's overall approach to trust in ML, with a focus on our work on unlearning and how to obtain verifiable guarantees of trust.
Details: is.mpg.de/events/speci...
Great news! Congrats Xiao!
18.02.2025 21:56
For Canadian colleagues, CIFAR and the CPI at UWaterloo are sponsoring a special issue "Artificial Intelligence Safety and Public Policy in Canada" in Canadian Public Policy / Analyse de politiques
More details: www.cpp-adp.ca
One of the first components of the CAISI (Canadian AI Safety Institute) research program has just launched: a call for Catalyst Grant Projects on AI Safety.
Funding: up to 100K for one year
Deadline to apply: February 27, 2025 (11:59 pm, AoE)
More details: cifar.ca/ai/cifar-ai-...
The list of accepted papers for @satml.org 2025 is now online:
🔗 satml.org/accepted-pap...
If you're intrigued by secure and trustworthy machine learning, join us April 9–11 in Copenhagen, Denmark 🇩🇰. Find more details here:
🔗 satml.org/attend/
It is organized this year by the amazing Konrad Rieck and @someshjha.bsky.social; you should attend!
When: April 9–11
Where: Copenhagen