
Tudor Cebere

@tcebere.bsky.social

PhD student in differential privacy & learning at Inria 🇫🇷

36 Followers  |  136 Following  |  1 Post  |  Joined: 14.11.2024

Latest posts by tcebere.bsky.social on Bluesky

Beyond Laplace and Gaussian: Exploring the Generalized Gaussian Mechanism for Private Machine Learning

Roy Rinberg, Ilia Shumailov, Vikrant Singhal, Rachel Cummings, Nicolas Papernot

http://arxiv.org/abs/2506.12553

Differential privacy (DP) is obtained by randomizing a data analysis algorithm, which necessarily introduces a tradeoff between its utility and privacy. Many DP mechanisms are built upon one of two underlying tools: the Laplace and Gaussian additive noise mechanisms. We expand the search space of algorithms by investigating the Generalized Gaussian (GG) mechanism, which samples the additive noise term $x$ with probability proportional to $e^{-(|x|/\sigma)^{\beta}}$ for some $\beta \geq 1$. The Laplace and Gaussian mechanisms are special cases of GG for $\beta=1$ and $\beta=2$, respectively.

In this work, we prove that all members of the GG family satisfy differential privacy, and provide an extension of an existing numerical accountant (the PRV accountant) for these mechanisms. We show that privacy accounting for the GG mechanism and its variants is dimension-independent, which substantially improves the computational cost of privacy accounting.

We apply the GG mechanism to two canonical tools for private machine learning, PATE and DP-SGD; we show empirically that $\beta$ has a weak relationship with test accuracy, and that $\beta=2$ (Gaussian) is generally nearly optimal. This provides justification for the widespread adoption of the Gaussian mechanism in DP learning, and can be interpreted as a negative result: optimizing over $\beta$ does not lead to meaningful improvements in performance.
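The GG noise density above is exactly the generalized normal family implemented in SciPy as `gennorm`, so a minimal sketch of the mechanism is straightforward. The parameter values below are purely illustrative, and `privatize_gradient` is a hypothetical helper: calibrating $\sigma$ to a target $(\epsilon, \delta)$ budget requires a numerical accountant such as the PRV-accountant extension the paper describes.

```python
import numpy as np
from scipy.stats import gennorm


def gg_noise(beta, sigma, size, rng=None):
    """Sample noise with density proportional to exp(-(|x| / sigma)**beta).

    beta=1 recovers Laplace-shaped noise and beta=2 recovers
    Gaussian-shaped noise (up to the usual scale conventions).
    """
    return gennorm.rvs(beta, scale=sigma, size=size, random_state=rng)


def privatize_gradient(grad, clip_norm, beta, sigma, rng=None):
    """Illustrative DP-SGD-style step: clip a gradient, then add GG noise.

    Hypothetical sketch only; sigma must be calibrated to the desired
    privacy budget with a numerical accountant before real use.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + gg_noise(beta, sigma, grad.shape, rng)


rng = np.random.default_rng(0)
grad = np.array([3.0, -4.0])  # norm 5, scaled down to clip_norm = 1
noisy = privatize_gradient(grad, clip_norm=1.0, beta=1.5, sigma=0.8, rng=rng)
```

Interpolating $\beta$ between 1 and 2 (or beyond) is what the paper's search over mechanisms amounts to; the empirical finding is that this extra degree of freedom buys little over plain Gaussian noise.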


17.06.2025 03:49 — 👍 6    🔁 1    💬 0    📌 0
Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model Machine learning models can be trained with formal privacy guarantees via differentially private optimizers such as DP-SGD. In this work, we focus on a threat model where the adversary has access...

Excited to be in Singapore for ICLR, presenting our work on privacy auditing (w/ Aurélien & @nicolaspapernot.bsky.social). If you are interested in differential privacy/privacy auditing/security for ML, drop by (#497 26 Apr 10-12:30 pm) or let's grab a coffee! ☕

openreview.net/forum?id=xzK...

21.04.2025 14:57 — 👍 4    🔁 1    💬 0    📌 0

Postdoc opportunity: if interested in a postdoc related to sketching starting Summer/Fall'25, especially applied to more efficient foundation model architectures (e.g. faster approx attention), please follow the instructions on the left column of theory.cs.berkeley.edu/postdoc.html by Jan 31

07.01.2025 11:25 — 👍 3    🔁 2    💬 0    📌 0
