Nicolas Papernot

@nicolaspapernot.bsky.social

Security and Privacy of Machine Learning at UofT, Vector Institute, and Google πŸ‡¨πŸ‡¦πŸ‡«πŸ‡·πŸ‡ͺπŸ‡Ί Co-Director of Canadian AI Safety Institute (CAISI) Research Program at CIFAR. Opinions mine

565 Followers  |  225 Following  |  21 Posts  |  Joined: 23.11.2024

Latest posts by nicolaspapernot.bsky.social on Bluesky

Thank you to @schmidtsciences.bsky.social for funding our lab's work on cryptographic approaches for verifiable guarantees in ML systems and for connecting us to other groups working on these questions!

23.07.2025 16:20 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ“„ Selective Prediction Via Training Dynamics
Paper ➑️ arxiv.org/abs/2205.13532
Workshop ➑️ 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
Poster ➑️ West Meeting Room 118-120 on Sat 19 Jul 10:15 a.m. β€” 11:15 a.m. & 4:45 p.m. β€” 5:30 p.m.

11.07.2025 20:03 β€” πŸ‘ 0    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

πŸ“„ Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (✨ oral paper ✨)
Paper ➑️ arxiv.org/abs/2505.22356
Poster ➑️ E-504 on Thu 17 Jul 4:30 p.m. β€” 7 p.m.
Oral Presentation ➑️ West Ballroom C on Thu 17 Jul 4:15 p.m. β€” 4:30 p.m.

11.07.2025 20:03 β€” πŸ‘ 0    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

πŸ“„ Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
TL;DR ➑️ We show that a model owner can artificially introduce uncertainty, and we provide a detection mechanism (see the sketch below).
Paper ➑️ arxiv.org/abs/2505.23968
Poster ➑️ E-1002 on Wed 16 Jul 11 a.m. β€” 1:30 p.m.
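For intuition on the threat, here is a minimal, hypothetical Python sketch (not the paper's code; the threshold and all names are illustrative assumptions): a classifier abstains when its top softmax confidence falls below a threshold, so a dishonest model owner can force abstention on chosen inputs simply by flattening the logits, e.g. by temperature scaling.

import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_or_abstain(logits, tau=0.8):
    # Predict the argmax class, or abstain (None) if the model's
    # top confidence falls below the threshold tau.
    p = softmax(logits)
    return int(p.argmax()) if p.max() >= tau else None

logits = np.array([6.0, 2.0, 1.0, 0.5, 0.0])
print(predict_or_abstain(logits))         # confident -> class 0
# A dishonest owner rescales the logits (temperature >> 1) so the model
# looks "uncertain" on this input and the abstention rule fires.
print(predict_or_abstain(logits / 10.0))  # -> None (abstain)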

11.07.2025 20:03 β€” πŸ‘ 0    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Congratulations, Yoshua!! It's more than deserved.

12.06.2025 15:58 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Abstract. Differentially private stochastic gradient descent (DP-SGD) trains machine learning (ML) models with formal privacy guarantees for the training set by adding random noise to gradient updates. In collaborative learning (CL), where multiple parties jointly train a model, noise addition occurs either (i) before or (ii) during secure gradient aggregation. The first option is deployed in distributed DP methods, which require greater amounts of total noise to achieve security, resulting in degraded model utility. The second approach preserves model utility but requires a secure multiparty computation (MPC) protocol. Existing methods for MPC noise generation require tens to hundreds of seconds of runtime per noise sample because of the number of parties involved. This makes them impractical for collaborative learning, which often requires thousands of noise samples or more in each training step.

We present a novel protocol for MPC noise sampling tailored to the collaborative learning setting. It works by constructing an approximation of the distribution of interest which can be efficiently sampled by a series of table lookups. Our method achieves significant runtime improvements and requires much less communication compared to previous work, especially at higher numbers of parties. It is also highly flexible: while previous MPC sampling methods tend to be optimized for specific distributions, we prove that our method can generically sample noise from statistically close approximations of arbitrary discrete distributions. This makes it compatible with a wide variety of DP mechanisms. Our experiments demonstrate the efficiency and utility of our method applied to a discrete Gaussian mechanism for differentially private collaborative learning. For 16 parties, we achieve a runtime of 0.06 seconds and 11.59 MB total communication per sample, a 230Γ— runtime improvement and 3Γ— less communication compared to the prior state of the art for sampling from a discrete Gaussian distribution in MPC.


Secure Noise Sampling for Differentially Private Collaborative Learning (Olive Franzese, Congyu Fang, Radhika Garg, Somesh Jha, Nicolas Papernot, Xiao Wang, Adam Dziedzic) ia.cr/2025/1025
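For intuition, a minimal in-the-clear Python sketch of the table-lookup idea (illustrative only; the paper's protocol performs the lookups obliviously inside MPC, and all names here are assumptions): tabulate the CDF of a truncated discrete Gaussian once, then draw each sample with a uniform draw and a binary-search table lookup.

import math
import random

def build_table(sigma, tail=8.0):
    # Tabulate the CDF of a discrete Gaussian on the integers,
    # truncated to [-tail*sigma, tail*sigma].
    lo, hi = -int(tail * sigma), int(tail * sigma)
    support = list(range(lo, hi + 1))
    weights = [math.exp(-z * z / (2.0 * sigma * sigma)) for z in support]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return support, cdf

def sample(support, cdf):
    # Inverse-CDF sampling: draw u ~ U[0,1) and binary-search for the
    # first table entry whose CDF value reaches u.
    u = random.random()
    lo, hi = 0, len(cdf) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cdf[mid] < u:
            lo = mid + 1
        else:
            hi = mid
    return support[lo]

support, cdf = build_table(sigma=10.0)
print([sample(support, cdf) for _ in range(5)])

In the protocol itself the uniform draw and the comparisons are secret-shared, so the one-time tabulation buys cheap per-sample lookups.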

02.06.2025 20:28 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 0    πŸ“Œ 1

Excited to share the first batch of research projects funded through the Canadian AI Safety Institute's research program at CIFAR!

The projects will tackle topics ranging from misinformation to safety in AI applications to scientific discovery.

Learn more: cifar.ca/cifarnews/20...

05.06.2025 14:21 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ“’ New ICML 2025 paper!

Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention

πŸ€” Think model uncertainty can be trusted?
We show that it can be misused, and how to stop it!
Meet Mirage (our attack πŸ’₯) & Confidential Guardian (our defense πŸ›‘οΈ).

🧡1/10

02.06.2025 14:38 β€” πŸ‘ 3    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

If you are submitting to @ieeessp.bsky.social
this year, a friendly reminder that the abstract submission deadline is this Thursday, May 29 (AoE).

More details: sp2026.ieee-security.org/cfpapers.html

27.05.2025 12:49 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

As part of the theme Societal Aspects of Securing the Digital Society, I will be hiring PhD students and postdocs at #MPI-SP, focusing in particular on the computational and sociotechnical aspects of technology regulations and the governance of emerging tech. Get in touch if interested.

23.05.2025 14:12 β€” πŸ‘ 3    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
Bid to host SaTML 2026: SaTML has so far been organized as a three-day conference. We are looking for volunteers interested in finding a venue to host the conference in 2026.

🌍 Help shape the future of SaTML!

We are on the hunt for a 2026 host city - and you could lead the way. Submit a bid to become General Chair of the conference:

forms.gle/vozsaXjCoPzc...

12.05.2025 12:15 β€” πŸ‘ 6    πŸ” 8    πŸ’¬ 0    πŸ“Œ 1
Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model: Machine learning models can be trained with formal privacy guarantees via differentially private optimizers such as DP-SGD.

Excited to be in Singapore for ICLR, presenting our work on privacy auditing (w/ AurΓ©lien & @nicolaspapernot.bsky.social). If you are interested in differential privacy, privacy auditing, or security for ML, drop by (#497, 26 Apr, 10 a.m. β€” 12:30 p.m.) or let's grab a coffee! β˜•

openreview.net/forum?id=xzK...
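For readers new to DP-SGD, a minimal Python sketch of one update step (an illustration with assumed names, not the implementation audited in the paper): clip each per-example gradient to a fixed L2 norm, sum, add Gaussian noise calibrated to the clipping bound, then average and step.

import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    # One DP-SGD update: per-example clipping to L2 norm <= clip,
    # then Gaussian noise with standard deviation sigma * clip.
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / len(per_example_grads)

w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.0])]
print(dp_sgd_step(w, grads))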

21.04.2025 14:57 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Congrats on what looks like an amazing event, Konrad!

10.04.2025 16:11 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Very exciting! Congratulations to the organizing team on what looks like an amazing event!

09.04.2025 14:16 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸ‘‹ Welcome to #SaTML25! Kicking things off with opening remarks --- excited for a packed schedule of keynotes, talks and competitions on secure and trustworthy machine learning.

09.04.2025 07:14 β€” πŸ‘ 6    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0
Image shows Karina Vold

Karina Vold says the rapid development of AI systems has left both philosophers & computer scientists grappling with difficult questions. #UofT πŸ’» uoft.me/bsp

01.04.2025 14:00 β€” πŸ‘ 13    πŸ” 8    πŸ’¬ 2    πŸ“Œ 4

Congratulations again, Stephan, on this brilliant next step! Looking forward to what you will accomplish with @randomwalker.bsky.social & @msalganik.bsky.social!

13.03.2025 07:50 β€” πŸ‘ 6    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The Canadian AI Safety Institute (CAISI) Research Program at CIFAR is now accepting Expressions of Interest for Solution Networks in AI Safety under two themes:

* Mitigating the Safety Risks of Synthetic Content
* AI Safety in the Global South

cifar.ca/ai/ai-and-so...

12.03.2025 19:20 β€” πŸ‘ 8    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

I think the talk is being streamed but only internally within the MPI. I'm not sure if you still have access from your time there?

08.03.2025 16:06 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I will be giving a talk at the MPI-IS @maxplanckcampus.bsky.social in TΓΌbingen next week (March 12 @ 11am). The talk will cover my group's overall approach to trust in ML, with a focus on our work on unlearning and how to obtain verifiable guarantees of trust.

Details: is.mpg.de/events/speci...

05.03.2025 15:40 β€” πŸ‘ 7    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0

Great news! Congrats Xiao!

18.02.2025 21:56 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

For Canadian colleagues, CIFAR and the CPI at UWaterloo are sponsoring a special issue "Artificial Intelligence Safety and Public Policy in Canada" in Canadian Public Policy / Analyse de politiques

More details: www.cpp-adp.ca

31.01.2025 19:59 β€” πŸ‘ 5    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
CIFAR AI Catalyst Grants: encouraging new collaborations and original research projects in the field of machine learning, as well as its application to different sectors of science and society.

One of the first components of the CAISI (Canadian AI Safety Institute) research program has just launched: a call for Catalyst Grant Projects on AI Safety.

Funding: up to 100K for one year
Deadline to apply: February 27, 2025 (11:59 p.m. AoE)
More details: cifar.ca/ai/cifar-ai-...

31.01.2025 19:42 β€” πŸ‘ 8    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
Accepted Papers

The list of accepted papers for @satml.org 2025 is now online:

πŸ“ƒ satml.org/accepted-pap...

If you're intrigued by secure and trustworthy machine learning, join us April 9-11 in Copenhagen, Denmark πŸ‡©πŸ‡°. Find more details here:

πŸ‘‰ satml.org/attend/

21.01.2025 14:25 β€” πŸ‘ 15    πŸ” 7    πŸ’¬ 0    πŸ“Œ 2

It is organized this year by the amazing Konrad Rieck and @someshjha.bsky.social; you should attend!

When: April 9-11
Where: Copenhagen

14.01.2025 16:05 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

If you work at the intersection of security, privacy, and machine learning, or more broadly on how to trust ML, SaTML is a small-scale conference with highly relevant work, where you'll be able to have high-quality conversations with colleagues working in your area.

14.01.2025 16:05 β€” πŸ‘ 12    πŸ” 5    πŸ’¬ 2    πŸ“Œ 0

Congrats @drolnick.bsky.social !

20.12.2024 17:50 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Hello world! The SaTML conference is now flying the blue skies!

SaTML is the IEEE Conference on Secure and Trustworthy Machine Learning. The 2025 iteration, chaired by @someshjha.bsky.social and @mlsec.org, will be in beautiful Copenhagen!

Follow for the latest updates on the conference!
satml.org

20.12.2024 01:13 β€” πŸ‘ 16    πŸ” 6    πŸ’¬ 0    πŸ“Œ 0

Thanks Tegan!

15.12.2024 02:33 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Thank you @shakirm.bsky.social πŸ™‚

13.12.2024 12:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
