Rasmus Pagh

@rasmuspagh.net.bsky.social

Professor of computer science at University of Copenhagen. Interested in random things & their application (especially to algorithms and privacy). rasmuspagh.net

559 Followers  |  310 Following  |  117 Posts  |  Joined: 15.11.2024

Latest posts by rasmuspagh.net on Bluesky

Another great piece in @quantamagazine.bsky.social, this time on surprising new algorithms for shortest paths. With quotes from BARC head and shortest-path grand old man Mikkel Thorup.

09.08.2025 05:31 — 👍 13    🔁 2    💬 0    📌 0
Call for Workshops – FOCS 2025

The Call for Workshops for #FOCS2025 is up! Submit a proposal by September 5!

📋 focs.computer.org/2025/call-fo...

Workshop chairs: Mohsen Ghaffari and Dakshita Khurana

06.08.2025 02:19 — 👍 3    🔁 3    💬 0    📌 0
Preview
SAT requires exhaustive search - Frontiers of Computer Science In this paper, we identify the distinction between non-brute-force computation and brute-force computation as the most fundamental problem in computer science. Subsequently, we prove, by the diagonali...

The “Ethics declaration” is interesting and may shed some light on why the EiC was reluctant.

link.springer.com/article/10.1...

05.08.2025 08:08 — 👍 2    🔁 0    💬 0    📌 0
DARA

DARA (Danish Advanced Research Academy) is a new initiative in which exceptional STEM candidates can get fully funded PhD scholarships at Danish universities. Those who are interested in applying should reach out to potential supervisors as soon as possible. daracademy.dk/fellowship/f...

28.06.2025 10:27 — 👍 9    🔁 0    💬 0    📌 1
First European Workshop on Differential Privacy
September 17, 2026
At IST Austria

Finally, a workshop on Differential Privacy in Europe! Organized by Monika Henzinger at IST Austria next year in September, so reserve the date! Monika's STOC keynote, where the workshop was announced, was about the intersection of privacy and dynamic algorithms. Lots of questions in this space!

26.06.2025 11:57 — 👍 8    🔁 0    💬 0    📌 0
Jonas Klausen hashing out the details at his PhD defense at BARC yesterday. Congratulations to Jonas for a successful defense!

13.06.2025 06:25 — 👍 9    🔁 0    💬 0    📌 0

Comprehensive survey of correlated noise mechanisms and their application in private machine learning!

12.06.2025 06:03 — 👍 5    🔁 0    💬 0    📌 0
Preview
Private Lossless Multiple Release Koufogiannis et al. (2016) showed a $\textit{gradual release}$ result for Laplace noise-based differentially private mechanisms: given an $\varepsilon$-DP release, a new release with privacy parameter...

In an upcoming #icml paper with @jdandersson.net, @lukasretschmeier.de, and Boel Nelson we extend this line of work to more general settings, e.g. allowing releases to be made adaptively in arbitrary order and supporting new noise distributions. The paper is now on arXiv: arxiv.org/abs/2505.22449

30.05.2025 06:26 — 👍 6    🔁 0    💬 1    📌 0

In 2015-2016 Koufogiannis, Han, and Pappas showed that multiple releases of Laplace and Gaussian mechanisms can be correlated such that having access to multiple releases exposes no more information than the least private release. In this sense we can create many releases with no loss in privacy!

30.05.2025 06:26 — 👍 4    🔁 0    💬 1    📌 0
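The Gaussian version of this correlated-release idea is easy to sketch. Below is a toy illustration under simplifying assumptions (scalar statistic, Gaussian noise; this is not the Laplace construction from the papers, and the function name is mine): sample the least noisy release first, then obtain each noisier one by adding fresh noise to the previous release. Every noisier release is then post-processing of a less noisy one and costs no extra privacy.

```python
import numpy as np

def correlated_releases(true_value, sigmas, rng):
    """Release the same statistic at several Gaussian noise levels such
    that all releases together reveal no more than the least noisy one.
    sigmas must be sorted ascending (least noisy / most revealing first)."""
    releases = []
    z = true_value + rng.normal(0.0, sigmas[0])  # least noisy release
    releases.append(z)
    prev_var = sigmas[0] ** 2
    for s in sigmas[1:]:
        # Each noisier release = previous release + independent noise,
        # so it is post-processing and adds no privacy cost.
        z = z + rng.normal(0.0, np.sqrt(s ** 2 - prev_var))
        prev_var = s ** 2
        releases.append(z)
    return releases
```

Note that averaging all the releases cannot denoise below the least noisy level: every release is a function of the first one plus data-independent noise.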

Differential privacy usually deals with releasing information about a dataset at a fixed privacy level with as high utility as possible. In some settings there is a need to have *multiple* releases at different levels of privacy/trust, but how can this be done without degrading privacy?

30.05.2025 06:26 — 👍 13    🔁 1    💬 1    📌 0
Preview
For Algorithms, a Little Memory Outweighs a Lot of Time | Quanta Magazine One computer scientist's “stunning” proof is the first progress in 50 years on one of the most famous questions in computer science.

In case you didn't see it! www.quantamagazine.org/for-algorith...

22.05.2025 19:01 — 👍 2    🔁 3    💬 0    📌 0
Preview
PhD fellowship in Computer Science, PhD Project in Privacy and Cybersecurity Department of Computer Science, Faculty of SCIENCE, University of Copenhagen

Are you an MSc student interested in privacy and security? Consider applying for this PhD position on metadata privacy with Boel Nelson:
candidate.hr-manager.net/ApplicationI...

21.05.2025 11:29 — 👍 3    🔁 0    💬 0    📌 0
Back to Square Roots: An Optimal Bound on the Matrix Factorization Error for Multi-Epoch Differentially Private SGD

Nikita P. Kalinin, Ryan McKenna, Jalaj Upadhyay, Christoph H. Lampert

http://arxiv.org/abs/2505.12128

Matrix factorization mechanisms for differentially private training have
emerged as a promising approach to improve model utility under privacy
constraints. In practical settings, models are typically trained over multiple
epochs, requiring matrix factorizations that account for repeated
participation. Existing theoretical upper and lower bounds on multi-epoch
factorization error leave a significant gap. In this work, we introduce a new
explicit factorization method, Banded Inverse Square Root (BISR), which imposes
a banded structure on the inverse correlation matrix. This factorization
enables us to derive an explicit and tight characterization of the multi-epoch
error. We further prove that BISR achieves asymptotically optimal error by
matching the upper and lower bounds. Empirically, BISR performs on par with
state-of-the-art factorization methods, while being simpler to implement,
computationally efficient, and easier to analyze.
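The abstract builds on the matrix-factorization mechanism for continual (prefix-sum) release. Here is a minimal NumPy sketch of the classical square-root factorization A = S·S that the title alludes to, not BISR itself; function names are mine. The prefix-sum workload A (all-ones lower triangle) has a lower-triangular Toeplitz square root whose entries are the Taylor coefficients of (1 − x)^(−1/2):

```python
import numpy as np

def sqrt_factor(n):
    """Lower-triangular Toeplitz square root S of the prefix-sum
    workload A = tril(ones), i.e. S @ S == A. The first-column entries
    are the Taylor coefficients of (1 - x)^(-1/2)."""
    f = np.zeros(n)
    f[0] = 1.0
    for k in range(1, n):
        f[k] = f[k - 1] * (2 * k - 1) / (2 * k)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            S[i, j] = f[i - j]
    return S

def dp_prefix_sums(x, sigma, rng):
    """Generic matrix-factorization mechanism for A = B @ C with
    B = C = S: privatize C @ x with Gaussian noise, post-process with B."""
    n = len(x)
    S = sqrt_factor(n)
    z = rng.normal(0.0, sigma, size=n)
    return S @ (S @ x + z)
```

For a single participation, the noise scale is calibrated to the largest column norm of C = S; with repeated participation across epochs the relevant sensitivity changes, which is the multi-epoch setting BISR's banded structure targets.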


20.05.2025 03:50 — 👍 5    🔁 1    💬 0    📌 0

I heard the research submitted to #neurips is so hot they need about 2000 ACs to manage it

19.05.2025 15:41 — 👍 14    🔁 0    💬 0    📌 0

“Of course you're a bit surprised. There's no denying that, says Axel Åhman of KAJ.” Like being surprised by the outcome of a coin flip…

18.05.2025 09:28 — 👍 1    🔁 0    💬 0    📌 0

It’s a stack of experiments, I guess, repeated many times until they finally reject the null hypothesis

06.05.2025 08:17 — 👍 1    🔁 0    💬 0    📌 0
Preview
not exactly rocket scientists | Far Side comics

Original: pin.it/4TIZhH1vi

06.05.2025 05:40 — 👍 0    🔁 0    💬 0    📌 0
Scientists standing in front of a bunch of p-values, most large but one smaller than 0.05. 
Caption: It's time we face reality, my friends... We're not exactly rugged scientists.

With apologies to Gary Larson

05.05.2025 16:11 — 👍 2    🔁 0    💬 2    📌 0
SODA 2026

The Call for Papers (CfP) for #SODA26 is out: www.siam.org/conferences-...

The submission server is open: soda26.hotcrp.com

Deadline: ⏰ Monday, July 14, AoE (July 15, 11:59am UTC)

30.04.2025 08:13 — 👍 14    🔁 11    💬 0    📌 0

- Research is needed not just on core ML methods, but on AI in combination with cybersecurity, formal verification, human-computer interaction, and more.

The post explicitly does not address AI for military use, which would "require a deeper analysis".

22.04.2025 08:00 — 👍 0    🔁 0    💬 0    📌 0

- AI has potential for harmful use but can also be used to defend against harm; ensuring a playing field that gives defensive use an advantage is key
- Trying to limit the development of capability is likely to be counterproductive
- Regulation should address AI application, not capability

22.04.2025 08:00 — 👍 0    🔁 0    💬 1    📌 0

Post authors @randomwalker.bsky.social and @sayash.bsky.social do a great job of articulating the view. Some takeaways:
- Many societal issues associated with AI are not new, but have arisen in other contexts
- We can build on existing approaches to mitigating risk of harm through regulation

22.04.2025 08:00 — 👍 0    🔁 0    💬 1    📌 0

"AI as normal technology" captures a view that many of us hold but that has not been very visible in the debates around AI. I used Substack's text-to-audio function for the first time to listen to www.aisnakeoil.com/p/ai-as-norm... (skipping the last 30 minutes, where the reference list is read aloud...)

22.04.2025 08:00 — 👍 5    🔁 0    💬 1    📌 0
Preview
Privacy Amplification by Subsampling Privacy Amplification by Subsampling is an important property of differential privacy. It is key to making many algorithms efficient – particularly in machine learning applications. Thus a lot of wor...

Insightful pair of blog posts by @stein.ke on #privacy amplification by subsampling: instead of running your privacy-preserving algorithm on the full dataset, run it on a random subset. Better!(?) Why, and what are the limitations?

differentialprivacy.org/subsampling/
differentialprivacy.org/subsampling-...

21.04.2025 23:19 — 👍 15    🔁 2    💬 1    📌 0
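The core calculation behind amplification by subsampling is short enough to sketch. For Poisson subsampling at rate q applied to a pure ε-DP mechanism, the standard bound gives an amplified parameter ε' = log(1 + q(e^ε − 1)), roughly q·ε when q·ε is small (a minimal sketch; the function name is mine, and the blog posts cover much more general regimes):

```python
import math

def amplified_epsilon(eps, q):
    """Standard privacy-amplification bound for Poisson subsampling at
    rate q applied to an eps-DP mechanism: log(1 + q * (e^eps - 1))."""
    return math.log(1 + q * (math.exp(eps) - 1))
```

For example, with ε = 1 and q = 0.01 the amplified parameter is about 0.017; with q = 1 (no subsampling) the bound gives back ε, as it should.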
ESA – ALGO 2025

European Symposium on Algorithms 2025 will be held in Warsaw in September, as part of ALGO 2025. Do you have great work on the design and analysis of algorithms? Submit it by April 23! algo-conference.org/2025/esa/

08.04.2025 14:45 — 👍 4    🔁 1    💬 0    📌 0
Preview
FORC 2025: Accepted Papers Hardness and Approximation Algorithms for Balanced Districting ProblemsPrathamesh Dharangutte, Jie Gao, Shang-En Huang and Fang-Yi Yu Privacy-Computation Trade-Offs in Private Repetition and Metase…

FORC 2025 accepted papers, lots of interesting stuff! responsiblecomputing.org/forc-2025-ac...

03.04.2025 11:58 — 👍 4    🔁 1    💬 0    📌 0
Preview
Ioana O. Bercea on X: "I'm recruiting! Please help spread the word! 2 PhDs available at KTH Royal Institute of Technology, in the area of Theoretical Computer Science. Deadline is April 3rd, 2025. More details here: https://t.co/rbDT01n5XI"

Opportunity to do a PhD in a wonderful city, at a great university, with a fantastic supervisor! x.com/ioanabercea/...

20.03.2025 08:52 — 👍 0    🔁 2    💬 0    📌 0
Preview
A glossary of differential privacy terms - Ted is writing things A list of short definitions of commonly used terms in differential privacy, with references for further reading.

I got annoyed that I couldn't find a good glossary of DP terms, so I wrote one: https://desfontain.es/blog/differential-privacy-glossary.html ✨

Simple definitions, links to further reading, and some mild snark about how bad we are at naming things sometimes 🙃

10.03.2025 20:16 — 👍 6    🔁 1    💬 0    📌 0
Preview
Trump's Latest Weapon Against Critics: Destroying Their Lawyers When a president uses executive power to not just blacklist but effectively destroy a major law firm, solely for representing political opponents, it means he's given up any pretense that he's not …

There are a million horrible things happening all at once, but wanted to take a moment to focus on the exec order against Perkins Coie and how it's an attempt to make it difficult for anyone to use the legal process to challenge all this law breaking.

www.techdirt.com/2025/03/11/t...

11.03.2025 18:33 — 👍 1446    🔁 512    💬 30    📌 28
🧵 New paper on arXiv: Optimal Differentially Private Sampling of Unbounded Gaussians.

With @uwcheritoncs.bsky.social undergrad Valentio Iverson and PhD student Argyris Mouzakis (@argymouz.bsky.social).

The first O(d) algorithm for privately sampling arbitrary Gaussians! arxiv.org/abs/2503.01766 1/n

04.03.2025 15:25 — 👍 14    🔁 5    💬 1    📌 2
