
Jalaj Upadhyay

@jalajupadhyay.bsky.social

Assistant Prof @Rutgers

92 Followers  |  96 Following  |  3 Posts  |  Joined: 18.11.2024

Latest posts by jalajupadhyay.bsky.social on Bluesky

https://barc.ku.dk/wave-conference/

150 years ago, the first women were enrolled at the University of Copenhagen. To celebrate, four of my awesome BARC colleagues are arranging the WAVE workshop on October 10th.

Amazing speakers and panelists, full-day catering, and Copenhagen culture night in the evening! Join us!

t.co/zIeUnYVXMC

22.09.2025 18:44 | 👍 4  🔁 2  💬 0  📌 0
Normalized Square Root: Sharper Matrix Factorization Bounds for Differentially Private Continual Counting

Monika Henzinger, Nikita P. Kalinin, Jalaj Upadhyay

http://arxiv.org/abs/2509.14334

The factorization norms of the lower-triangular all-ones $n \times n$ matrix,
$\gamma_2(M_{count})$ and $\gamma_{F}(M_{count})$, play a central role in
differential privacy as they are used to give theoretical justification of the
accuracy of the only known production-level private training algorithm of deep
neural networks by Google. Prior to this work, the best known upper bound on
$\gamma_2(M_{count})$ was $1 + \frac{\log n}{\pi}$ by Mathias (Linear Algebra
and Applications, 1993), and the best known lower bound was $\frac{1}{\pi}(2 +
\log(\frac{2n+1}{3})) \approx 0.507 + \frac{\log n}{\pi}$ (Matou\v{s}ek,
Nikolov, Talwar, IMRN 2020), where $\log$ denotes the natural logarithm.
Recently, Henzinger and Upadhyay (SODA 2025) gave the first explicit
factorization that meets the bound of Mathias (1993) and asked whether there
exists an explicit factorization that improves on Mathias' bound. We answer
this question in the affirmative. Additionally, we improve the lower bound
significantly. More specifically, we show that $$
  0.701 + \frac{\log n}{\pi} + o(1) \;\leq\; \gamma_2(M_{count}) \;\leq\; 0.846
+ \frac{\log n}{\pi} + o(1). $$ That is, we reduce the gap between the upper
and lower bound to $0.14 + o(1)$.
  We also show that our factors achieve a better upper bound for
$\gamma_{F}(M_{count})$ compared to prior work, and we establish an improved
lower bound: $$
  0.701 + \frac{\log n}{\pi} + o(1) \;\leq\; \gamma_{F}(M_{count}) \;\leq\;
0.748 + \frac{\log n}{\pi} + o(1). $$ That is, the gap between the lower and
upper bound provided by our explicit factorization is $0.047 + o(1)$.


19.09.2025 03:48 | 👍 3  🔁 1  💬 0  📌 0

📢 Adam Smith, @gautamkamath.com, and I are putting together a list of job market candidates in Foundations of Responsible Computing! Last year's list was a great success, so we're keeping it going!

If you want to be included, or nominate someone, see link in the replies!

15.09.2025 12:41 | 👍 13  🔁 7  💬 2  📌 2
Differentially Private Synthetic Graphs Preserving Triangle-Motif Cuts

Pan Peng, Hangyu Xu

http://arxiv.org/abs/2507.14835

We study the problem of releasing a differentially private (DP) synthetic
graph $G'$ that well approximates the triangle-motif sizes of all cuts of any
given graph $G$, where a motif in general refers to a frequently occurring
subgraph within complex networks. Non-private versions of such graphs have
found applications in diverse fields such as graph clustering, graph
sparsification, and social network analysis. Specifically, we present the first
$(\varepsilon,\delta)$-DP mechanism that, given an input graph $G$ with $n$
vertices, $m$ edges and local sensitivity of triangles $\ell_{3}(G)$, generates
a synthetic graph $G'$ in polynomial time, approximating the triangle-motif
sizes of all cuts $(S,V\setminus S)$ of the input graph $G$ up to an additive
error of $\tilde{O}(\sqrt{m\ell_{3}(G)}n/\varepsilon^{3/2})$. Additionally, we
provide a lower bound of $\Omega(\sqrt{mn}\ell_{3}(G)/\varepsilon)$ on the
additive error for any DP algorithm that answers the triangle-motif size
queries of all $(S,T)$-cut of $G$. Finally, our algorithm generalizes to
weighted graphs, and our lower bound extends to any $K_h$-motif cut for any
constant $h\geq 2$.


22.07.2025 03:49 | 👍 1  🔁 1  💬 0  📌 0

"Mama, I'm Coming Home" popped up in my YouTube recommendations today. I couldn't bring myself to play it.

22.07.2025 22:25 | 👍 1  🔁 0  💬 0  📌 0
Breaking the $n^{1.5}$ Additive Error Barrier for Private and Efficient Graph Sparsification via Private Expander Decomposition

Anders Aamand, Justin Y. Chen, Mina Dalirrooyfard, Slobodan Mitrović, Yuriy Nevmyvaka, Sandeep Silwal, Yinzhan Xu

http://arxiv.org/abs/2507.01873

We study differentially private algorithms for graph cut sparsification, a
fundamental problem in algorithms, privacy, and machine learning. While
significant progress has been made, the best-known private and efficient cut
sparsifiers on $n$-node graphs approximate each cut within
$\widetilde{O}(n^{1.5})$ additive error and $1+\gamma$ multiplicative error for
any $\gamma > 0$ [Gupta, Roth, Ullman TCC'12]. In contrast, "inefficient"
algorithms, i.e., those requiring exponential time, can achieve an
$\widetilde{O}(n)$ additive error and $1+\gamma$ multiplicative error
[Eli{\'a}{\v{s}}, Kapralov, Kulkarni, Lee SODA'20]. In this work, we break the
$n^{1.5}$ additive error barrier for private and efficient cut sparsification.
We present an $(\varepsilon,\delta)$-DP polynomial time algorithm that, given a
non-negative weighted graph, outputs a private synthetic graph approximating
all cuts with multiplicative error $1+\gamma$ and additive error $n^{1.25 +
o(1)}$ (ignoring dependencies on $\varepsilon, \delta, \gamma$).
  At the heart of our approach lies a private algorithm for expander
decomposition, a popular and powerful technique in (non-private) graph
algorithms.


03.07.2025 03:48 | 👍 1  🔁 1  💬 0  📌 0
On Design Principles for Private Adaptive Optimizers

Arun Ganesh, Brendan McMahan, Abhradeep Thakurta

http://arxiv.org/abs/2507.01129

The spherical noise added to gradients in differentially private (DP)
training undermines the performance of adaptive optimizers like AdaGrad and
Adam, and hence many recent works have proposed algorithms to address this
challenge. However, the empirical results in these works focus on simple tasks
and models and the conclusions may not generalize to model training in
practice. In this paper we survey several of these variants, and develop better
theoretical intuition for them as well as perform empirical studies comparing
them. We find that a common intuition of aiming for unbiased estimates of
second moments of gradients in adaptive optimizers is misguided, and instead
that a simple technique called scale-then-privatize (which does not achieve
unbiased second moments) has more desirable theoretical behaviors and
outperforms all other variants we study on a small-scale language model
training task. We additionally argue that scale-then-privatize causes the noise
addition to better match the application of correlated noise mechanisms which
are more desirable to use in practice.


03.07.2025 03:49 | 👍 1  🔁 1  💬 0  📌 0

ICML's election for their board of directors has begun. I've thrown my hat in the ring. Please consider voting for Gautam Kamath.

I have experience with the governance of TMLR, COLT, and ALT, and I think I've demonstrated myself to be a conscientious and engaged community member.

30.06.2025 12:44 | 👍 30  🔁 5  💬 0  📌 1
TCS for All: Theoretical Computer Science without Barriers

This year's TCS for All Inspiration Talk will be given by Sofya Raskhodnikova (Boston University) on June 27th at our STOC 2025 TCS for All meeting. Join us! We are relocating the TCS for All Rising Star Workshop to FOCS 2025 this year. Stay tuned. SIGACT.org/tcsforall/
#stoc2025 @ccanonne.github.io

18.06.2025 02:33 | 👍 18  🔁 7  💬 0  📌 1
Correlated Noise Mechanisms for Differentially Private Learning

Krishna Pillutla, Jalaj Upadhyay, Christopher A. Choquette-Choo, Krishnamurthy Dvijotham, Arun Ganesh, Monika Henzinger, Jonathan Katz, Ryan McKenna, H. Brendan McMahan, Keith Rush, Thomas Steinke, Abhradeep Thakurta

http://arxiv.org/abs/2506.08201

This monograph explores the design and analysis of correlated noise
mechanisms for differential privacy (DP), focusing on their application to
private training of AI and machine learning models via the core primitive of
estimation of weighted prefix sums. While typical DP mechanisms inject
independent noise into each step of a stochastic gradient (SGD) learning
algorithm in order to protect the privacy of the training data, a growing body
of recent research demonstrates that introducing (anti-)correlations in the
noise can significantly improve privacy-utility trade-offs by carefully
canceling out some of the noise added on earlier steps in subsequent steps.
Such correlated noise mechanisms, known variously as matrix mechanisms,
factorization mechanisms, and DP-Follow-the-Regularized-Leader (DP-FTRL) when
applied to learning algorithms, have also been influential in practice, with
industrial deployment at a global scale.
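The monograph's core primitive, privately estimating prefix sums, can be illustrated with a toy variance calculation (a sketch only; it uses the classical square-root factorization and unit-sensitivity Gaussian noise, not any deployed DP-FTRL configuration). The point is that correlated noise, injected as $B(Cx + z)$ for a factorization $M_{count} = BC$, accumulates far more slowly than independent input perturbation:

```python
import numpy as np

n = 256
# Toeplitz coefficients of the square-root factorization B = C = sqrt(M_count).
f = np.empty(n)
f[0] = 1.0
for k in range(1, n):
    f[k] = f[k - 1] * (2 * k - 1) / (2 * k)

# Baseline: input perturbation. Release M_count @ (x + z) with unit-variance z
# (L2 sensitivity 1). The t-th prefix sum then carries variance t.
var_independent = np.arange(1, n + 1, dtype=float)

# Factorization mechanism: release B @ (C x + z). Per unit of privacy budget
# the noise scale is the max column norm of C, and the t-th output variance is
# the squared norm of the t-th row of B times that squared scale.
sens2 = float(np.sum(f ** 2))      # squared max column norm of C
row2 = np.cumsum(f ** 2)           # squared row norms of B
var_correlated = row2 * sens2

print(f"max std, independent noise: {np.sqrt(var_independent[-1]):.2f}")
print(f"max std, correlated noise:  {np.sqrt(var_correlated[-1]):.2f}")
```

The independent baseline's worst-case standard deviation grows like $\sqrt{n}$, while the correlated mechanism's grows only logarithmically, which is the privacy-utility gain the abstract describes.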


11.06.2025 15:48 | 👍 8  🔁 2  💬 0  📌 1

You should ask Andrew Wiles and get his input on this vacuum :) He'll probably have the best answer.

08.04.2025 16:03 | 👍 1  🔁 0  💬 0  📌 0
DIMACS :: Details

DIMACS workshop on fine-grained hardness of approximation, July 21-23.
Information and registration at the link; looks interesting!
dimacs.rutgers.edu/events/detai...

09.02.2025 18:20 | 👍 11  🔁 4  💬 0  📌 0
Differentially Private Matchings
Michael Dinitz, George Z. Li, Quanquan C. Liu, Felix Zhou
http://arxiv.org/abs/2501.00926
Computing matchings in general graphs plays a central role in graph
algorithms. However, despite the recent interest in differentially private
graph algorithms, there has been limited work on private matchings. Moreover,
almost all existing work focuses on estimating the size of the maximum
matching, whereas in many applications, the matching itself is the object of
interest. There is currently only a single work on private algorithms for
computing matching solutions by [HHRRW STOC'14]. Moreover, their work focuses
on allocation problems and hence is limited to bipartite graphs.
  Motivated by the importance of computing matchings in sensitive graph data,
we initiate the study of differentially private algorithms for computing
maximal and maximum matchings in general graphs. We provide a number of
algorithms and lower bounds for this problem in different models and settings.
We first prove a lower bound showing that computing explicit solutions
necessarily incurs large error, even if we try to obtain privacy by allowing
ourselves to output non-edges. We then consider implicit solutions, where at
the end of the computation there is an ($\varepsilon$-differentially private)
billboard and each node can determine its matched edge(s) based on what is
written on this publicly visible billboard. For this solution concept, we
provide tight upper and lower (bicriteria) bounds, where the degree bound is
violated by a logarithmic factor (which we show is necessary). We further show
that our algorithm can be made distributed in the local edge DP (LEDP) model,
and can even be done in a logarithmic number of rounds if we further relax the
degree bounds by logarithmic factors. Our edge-DP matching algorithms give rise
to new matching algorithms in the node-DP setting by combining our edge-DP
algorithms with a novel use of arboricity sparsifiers. [...]


06.01.2025 04:33 | 👍 4  🔁 1  💬 0  📌 0
Career Opportunities – University of Copenhagen

Looking for a PhD or postdoc position in theoretical computer science? Consider applying to one of our calls, closing January 10! barc.ku.dk/career/

06.01.2025 07:20 | 👍 8  🔁 8  💬 0  📌 0

Happy new year! Guest post on my blog by Abhradeep Thakurta, featuring his perspective on interviewing/hiring for faculty and industry research positions in CS/ML. I add some of my own comments at the end. Comments and perspectives welcome!

kamathematics.wordpress.com/2025/01/02/g...

02.01.2025 15:27 | 👍 15  🔁 2  💬 0  📌 0
Streaming Private Continual Counting via Binning
Joel Daniel Andersson, Rasmus Pagh
http://arxiv.org/abs/2412.07093
In differential privacy, $\textit{continual observation}$ refers to problems
in which we wish to continuously release a function of a dataset that is
revealed one element at a time. The challenge is to maintain a good
approximation while keeping the combined output over all time steps
differentially private. In the special case of $\textit{continual counting}$ we
seek to approximate a sum of binary input elements. This problem has received
considerable attention lately, in part due to its relevance in implementations
of differentially private stochastic gradient descent. $\textit{Factorization
mechanisms}$ are the leading approach to continual counting, but the best such
mechanisms do not work well in $\textit{streaming}$ settings since they require
space proportional to the size of the input. In this paper, we present a simple
approach to approximating factorization mechanisms in low space via
$\textit{binning}$, where adjacent matrix entries with similar values are
changed to be identical in such a way that a matrix-vector product can be
maintained in sublinear space. Our approach has provable sublinear space
guarantees for a class of lower triangular matrices whose entries are
monotonically decreasing away from the diagonal. We show empirically that even
with very low space usage we are able to closely match, and sometimes surpass,
the performance of asymptotically optimal factorization mechanisms. Recently,
and independently of our work, Dvijotham et al. have also suggested an approach
to implementing factorization mechanisms in a streaming setting. Their work
differs from ours in several respects: It only addresses factorization into
$\textit{Toeplitz}$ matrices, only considers $\textit{maximum}$ error, and uses
a different technique based on rational function approximation that seems less
versatile than our binning approach.
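The binning idea in this abstract can be sketched in a few lines (a hypothetical minimal illustration, not the authors' exact construction): walk a monotone Toeplitz coefficient sequence, start a new bin whenever the value has dropped by more than a chosen ratio, and represent every entry in a bin by the bin's first value. Only one value per bin needs storing, and for a nonnegative input stream each output is distorted by at most that ratio:

```python
import numpy as np

def bin_coeffs(c, ratio=1.1):
    """Greedy binning of a nonincreasing coefficient sequence: open a new bin
    whenever the value drops below head/ratio, and represent every entry by
    its bin's first (largest) value. Each entry grows by at most a factor of
    `ratio`, and only one value per bin must be stored."""
    binned = np.empty_like(c)
    head = c[0]
    for i, v in enumerate(c):
        if v < head / ratio:
            head = v
        binned[i] = head
    return binned

n = 2000
c = np.empty(n)   # Toeplitz coefficients of (1-x)^(-1/2), decreasing in k
c[0] = 1.0
for k in range(1, n):
    c[k] = c[k - 1] * (2 * k - 1) / (2 * k)

b = bin_coeffs(c)
print("distinct values kept:", len(np.unique(b)), "out of", n)

# For a nonnegative input stream x, the binned outputs over-estimate the exact
# ones by at most a factor `ratio`, since every matrix entry only grew, and by
# no more than 10%.
rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=n))
idx = np.subtract.outer(np.arange(n), np.arange(n))
exact = np.where(idx >= 0, c[np.clip(idx, 0, n - 1)], 0.0) @ x
approx = np.where(idx >= 0, b[np.clip(idx, 0, n - 1)], 0.0) @ x
assert np.all(exact <= approx) and np.all(approx <= 1.1 * exact + 1e-12)
```

The number of distinct values grows only logarithmically in the coefficient range, which is the source of the sublinear-space guarantee for matrices whose entries decrease away from the diagonal.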


11.12.2024 04:34 | 👍 10  🔁 1  💬 0  📌 0

Another cool work by Joel Andersson and @rasmuspagh.net on low-space continual counting. This result is more general than that in the recent FOCS paper. arxiv.org/abs/2412.07093

11.12.2024 08:07 | 👍 6  🔁 2  💬 0  📌 0

If you are on the job market, and work on the kinds of topics that might be published in FORC ("Foundations of Responsible Computing"), fill out this form and get on the job list. This covers mathematical research on privacy, fairness, and related topics.

18.11.2024 01:47 | 👍 19  🔁 12  💬 0  📌 0
