
Florentin Guth

@florentinguth.bsky.social

Postdoc at NYU CDS and Flatiron CCN. Wants to understand why deep learning works.

161 Followers  |  109 Following  |  17 Posts  |  Joined: 13.11.2024

Latest posts by florentinguth.bsky.social on Bluesky


🔥 Mark your calendars for the next session of the @ellis.eu x UniReps Speaker Series!

πŸ—“οΈ When: 31th July – 16:00 CEST
πŸ“ Where: ethz.zoom.us/j/66426188160
πŸŽ™οΈ Speakers: Keynote by
@pseudomanifold.topology.rocks & Flash Talk by @florentinguth.bsky.social

23.07.2025 08:37 – 👍 15    🔁 5    💬 1    📌 2

Next appointment: 31st July 2025 – 16:00 CEST on Zoom with 🔵 Keynote: @pseudomanifold.topology.rocks (University of Fribourg) 🔴 Flash Talk: @florentinguth.bsky.social (NYU & Flatiron)

10.07.2025 08:47 – 👍 2    🔁 2    💬 0    📌 0

What I meant is that there are generalizations of the CLT to infinite variance. The limit is then an alpha-stable distribution (which includes the Gaussian and the Cauchy, but not the Gumbel). Also, even if x is heavy-tailed, log p(x) typically is not. So a product of Cauchy distributions has a Gaussian log p(x)!

08.06.2025 18:39 – 👍 1    🔁 0    💬 1    📌 0
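A quick numerical check of that last claim (a minimal sketch; the dimension and sample count are arbitrary choices, not from the post):

```python
import numpy as np
from scipy import stats

# Sketch: x has d iid standard Cauchy components, so log p(x) = sum_i log p(x_i).
# Each term -log(pi * (1 + x_i^2)) has light (exponential) tails and finite variance,
# so the sum should look Gaussian by the ordinary CLT even though x itself is heavy-tailed.
rng = np.random.default_rng(0)
d, n = 512, 10_000                         # arbitrary dimension and sample count
x = rng.standard_cauchy(size=(n, d))
log_p = stats.cauchy.logpdf(x).sum(axis=1)

print("skewness:", stats.skew(log_p))             # close to 0 for a Gaussian
print("excess kurtosis:", stats.kurtosis(log_p))  # close to 0 for a Gaussian
```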

At the same time, there are simple distributions that have Gumbel-distributed log probabilities. The simplest example I could find is a Gaussian scale mixture where the variance is distributed like an exponential variable. So it is not clear if we will be able to say something more about this! 2/2

08.06.2025 17:00 – 👍 1    🔁 0    💬 0    📌 0
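One reading of this example under which a Gumbel appears (my assumption: the exponential variance is shared across all d coordinates, which may not be the exact construction meant above):

```latex
% Assumed construction: one exponential variance shared across all d coordinates.
x \mid \sigma^2 \sim \mathcal{N}(0, \sigma^2 I_d), \qquad \sigma^2 \sim \mathrm{Exp}(1).
% For large d, \|x\|^2 \approx d\sigma^2, and a Laplace approximation of the mixture integral gives
\log p(x) \approx -\tfrac{d}{2} \log\!\big(2\pi e\, \sigma^2\big) + \text{lower-order terms}.
% Since -\log V is standard Gumbel when V \sim \mathrm{Exp}(1), \log p(x) is then
% approximately an affine transformation of a Gumbel variable.
```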

If you have independent components, even heavy-tailed ones, then log p(x) is a sum of iid variables and is thus distributed according to a (sum-)stable law. A conjecture is that the minimum (i.e., the extreme-value behavior) instead comes from a logsumexp, so from a mixture distribution (a sum of the p's) rather than a product (a sum of the log p's). 1/2

08.06.2025 17:00 – 👍 1    🔁 0    💬 2    📌 0
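Spelling out the contrast in the post above (just restating its reasoning in formulas):

```latex
% Independent components: the log-density is a sum of independent terms, hence a (sum-)stable limit.
\log p(x) = \sum_{i=1}^{d} \log p_i(x_i)
% Mixture of K components: the log-density is a logsumexp, which behaves like a maximum,
% so extreme-value (Gumbel-type) statistics become plausible.
\log p(x) = \log \sum_{k=1}^{K} \pi_k\, p_k(x)
          = \operatorname*{logsumexp}_{k}\big(\log \pi_k + \log p_k(x)\big)
          \approx \max_{k}\big(\log \pi_k + \log p_k(x)\big)
```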

For a more in-depth discussion of the approach and results (and more!): arxiv.org/pdf/2506.05310

06.06.2025 22:11 – 👍 5    🔁 0    💬 0    📌 0

Finally, we test the manifold hypothesis: what is the local dimensionality around an image? We find that this depends on both the image and the size of the local neighborhood, and there exist images with both large full-dimensional and small low-dimensional neighborhoods.

06.06.2025 22:11 – 👍 4    🔁 0    💬 1    📌 0

High probability ≠ typicality: very high-probability images are rare. This is not a contradiction: frequency = probability density *multiplied by volume*, and volume is weird in high dimensions! Also, the log probabilities are Gumbel-distributed, and we don't know why!

06.06.2025 22:11 – 👍 5    🔁 1    💬 2    📌 0
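The density-times-volume point can already be seen for an isotropic Gaussian (a standard illustration, not the paper's model):

```python
import numpy as np

# For x ~ N(0, I_d), the mode x = 0 has the highest density, yet typical samples
# land near the sphere ||x|| ~ sqrt(d): the tiny volume near the mode makes
# high-density points rare, exactly the "frequency = density * volume" effect.
rng = np.random.default_rng(0)
d = 3 * 64 * 64                                    # an ImageNet64-sized vector
x = rng.standard_normal((10_000, d))

log_p = -0.5 * (x ** 2).sum(axis=1) - 0.5 * d * np.log(2 * np.pi)
log_p_mode = -0.5 * d * np.log(2 * np.pi)          # log density at x = 0

print("typical log p:", log_p.mean(), "+/-", log_p.std())
print("log p at the mode:", log_p_mode)            # larger by about d/2 = 6144 nats
```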

These are the highest- and lowest-probability images in ImageNet64. An interpretation is that -log2 p(x) is the size in bits of the optimal compression of x: higher-probability images are more compressible. Also, the probability ratio between them is 10^14,000! 🤯

06.06.2025 22:11 – 👍 5    🔁 0    💬 1    📌 0
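For scale, converting that probability ratio into code lengths (simple arithmetic, not a figure from the paper):

```latex
\frac{p(x_{\mathrm{max}})}{p(x_{\mathrm{min}})} = 10^{14{,}000}
\quad\Longrightarrow\quad
\log_2 p(x_{\mathrm{max}}) - \log_2 p(x_{\mathrm{min}}) = 14{,}000 \cdot \log_2 10 \approx 46{,}500 \text{ bits}
```

That is, the ideal code for the least probable image is roughly 46,500 bits longer than the one for the most probable image.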

But how do we know our probability model is accurate on real data?
In addition to computing cross-entropy/NLL, we show *strong* generalization: models trained on *disjoint* subsets of the data predict the *same* probabilities if the training set is large enough!

06.06.2025 22:11 – 👍 2    🔁 0    💬 1    📌 0
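A minimal sketch of what such a consistency check could look like (the model objects and their .log_prob interface are hypothetical stand-ins, not the paper's code):

```python
import numpy as np

def strong_generalization_gap(model_a, model_b, test_images):
    """Compare log probabilities assigned by two models trained on disjoint data.

    `model_a` / `model_b` are hypothetical objects exposing .log_prob(images);
    strong generalization means the two sets of values nearly coincide.
    """
    lp_a = np.asarray(model_a.log_prob(test_images))
    lp_b = np.asarray(model_b.log_prob(test_images))
    corr = np.corrcoef(lp_a, lp_b)[0, 1]
    rmse = np.sqrt(np.mean((lp_a - lp_b) ** 2))
    return corr, rmse  # corr close to 1 and small rmse indicate matching predictions
```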

We call this approach "dual score matching". The time derivative constrains the learned energy to satisfy the diffusion equation, which enables recovery of accurate and *normalized* log probability values, even in high-dimensional multimodal distributions.

06.06.2025 22:11 – 👍 3    🔁 0    💬 1    📌 0
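For context, here is the constraint I believe is being referred to, written for a Gaussian smoothing with variance t (the paper may use a different noise parametrization):

```latex
% With p_t = p * \mathcal{N}(0, t I_d), the density solves the heat equation, and U_t = -\log p_t satisfies
\partial_t p_t = \tfrac{1}{2} \Delta p_t
\quad\Longrightarrow\quad
\partial_t U_t(y) = \tfrac{1}{2}\Big(\Delta_y U_t(y) - \big\|\nabla_y U_t(y)\big\|^{2}\Big).
```

In particular, the additive constant of U (its normalization) is no longer arbitrary once the time derivative is also constrained, which is presumably how normalized log probabilities become recoverable.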

We also propose a simple procedure to obtain good network architectures for the energy U: choose any pre-existing score network s and simply take its inner product with the input image y! We show that this preserves the inductive biases of the base score network: grad_y U ≈ s.

06.06.2025 22:11 – 👍 2    🔁 0    💬 1    📌 0
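A minimal PyTorch sketch of that construction (my reading of the post; the paper's exact scaling, sign, and noise-conditioning conventions may differ):

```python
import torch

class InnerProductEnergy(torch.nn.Module):
    """Scalar energy built from a pre-existing score network: U(y, t) = <s(y, t), y>."""

    def __init__(self, score_net):
        super().__init__()
        self.score_net = score_net  # any network mapping (y, t) -> tensor shaped like y

    def forward(self, y, t):
        s = self.score_net(y, t)
        return (s * y).flatten(1).sum(dim=1)  # one scalar energy per image in the batch

def spatial_gradient(energy, y, t):
    """grad_y U via autograd; the post's claim is that this stays close to s(y, t)."""
    y = y.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(energy(y, t).sum(), y)
    return g
```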

How do we train an energy model?
Inspired by diffusion models, we learn the energy of both clean and noisy images along a diffusion. It is optimized via a sum of two score matching objectives, which constrain its derivatives with respect to both the image (space) and the noise level (time).

06.06.2025 22:11 – 👍 2    🔁 0    💬 1    📌 0
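One way to write such a pair of objectives, using standard denoising-style regression targets for the variance-t Gaussian smoothing mentioned above (a sketch only; I don't claim these are the paper's exact losses):

```latex
% Setup (assumed): y = x + \sqrt{t}\,\varepsilon with \varepsilon \sim \mathcal{N}(0, I_d), i.e. p_t = p * \mathcal{N}(0, t I_d).
% Space: denoising score matching for \nabla_y U; the regression target is (y - x)/t.
\mathcal{L}_{\text{space}} = \mathbb{E}_{x,\, y}\Big[\big\|\nabla_y U(y, t) - \tfrac{y - x}{t}\big\|^2\Big]
% Time: the analogous regression for \partial_t U, with the target obtained by
% differentiating the Gaussian kernel in t.
\mathcal{L}_{\text{time}} = \mathbb{E}_{x,\, y}\Big[\Big(\partial_t U(y, t) - \tfrac{d}{2t} + \tfrac{\|y - x\|^2}{2t^2}\Big)^{2}\Big]
```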

What is the probability of an image? What do the highest and lowest probability images look like? Do natural images lie on a low-dimensional manifold?
In a new preprint with Zahra Kadkhodaie and @eerosim.bsky.social, we develop a novel energy-based model in order to answer these questions: 🧵

06.06.2025 22:11 – 👍 70    🔁 23    💬 1    📌 1

🌈 I'll be presenting our JMLR paper "A rainbow in deep network black boxes" today at 3pm at @iclr-conf.bsky.social!
Come to poster #334 if you're interested, I'll be happy to chat
More details in the threads on the other website: x.com/FlorentinGut...

25.04.2025 00:35 – 👍 4    🔁 1    💬 0    📌 0

This also manifests in what operator space and norm you're considering. Here you have bounded operators with operator norm or trace-class operators with nuclear norm. This matters a lot in infinite dimensions but also in finite but large dimensions!

10.04.2025 21:32 – 👍 1    🔁 0    💬 0    📌 0
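A toy illustration of why the choice of norm matters in large dimension d (standard facts, not from this thread):

```latex
% Identity on R^d: unit operator norm, but nuclear (trace) norm growing with d.
\|I_d\|_{\mathrm{op}} = 1, \qquad \|I_d\|_{\mathrm{nuc}} = d
% Rank-one projection uu^\top with \|u\| = 1: both norms equal 1.
\|uu^{\top}\|_{\mathrm{op}} = \|uu^{\top}\|_{\mathrm{nuc}} = 1
```

So an operator-norm ball admits "large" high-rank elements that a nuclear-norm ball of the same radius rules out; in infinite dimensions the identity is still bounded but no longer trace-class, so the two classes genuinely come apart.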

A loose thought that's been bubbling around for me recently: when you think of a 'generic' big matrix, you might think of it as being close to low-rank (e.g. kernel matrices), or very far from low-rank (e.g. the typical scope of random matrix theory). Intuition ought to be quite different in each.

10.04.2025 21:23 – 👍 12    🔁 1    💬 1    📌 0

Absolutely! Their behavior is quite different (e.g., consistency of eigenvalues and eigenvectors in the proportional asymptotic regime). You also want to use different objects to describe them: eigenvalues should be thought of either as a non-increasing sequence or as samples from a distribution.

10.04.2025 21:29 – 👍 1    🔁 0    💬 1    📌 0
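A small numerical illustration of the two regimes discussed above (generic choices on my part: an RBF kernel matrix of low-dimensional points vs. a white-noise sample covariance):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 500

# Near low-rank: RBF kernel matrix of points living in a 2-D space
# (eigenvalues decay quickly, a handful carry most of the trace).
pts = rng.standard_normal((n, 2))
sq_dists = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 2)
eig_K = np.sort(np.linalg.eigvalsh(K))[::-1]

# Far from low-rank: sample covariance of white noise
# (eigenvalues spread over a bulk, Marchenko-Pastur style).
X = rng.standard_normal((n, p))
C = X @ X.T / p
eig_C = np.sort(np.linalg.eigvalsh(C))[::-1]

for name, e in [("RBF kernel", eig_K), ("sample covariance", eig_C)]:
    frac = e[:10].sum() / e.sum()
    print(f"{name}: top-10 eigenvalues carry {frac:.1%} of the trace")
```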
SciForDL'24

Speaking at this #NeurIPS2024 workshop on a new analytic theory of creativity in diffusion models that predicts what new images they will create and explains how these images are constructed as patch mosaics of the training data. Great work by @masonkamb.bsky.social
scienceofdlworkshop.github.io

14.12.2024 17:01 – 👍 42    🔁 3    💬 0    📌 2

Excited to present work with @jfeather.bsky.social @eerosim.bsky.social and @sueyeonchung.bsky.social today at Neurips!

May do a proper thread later on, but come by or shoot me a message if you are in Vancouver and want to chat :)

Brief details in post below

12.12.2024 16:08 – 👍 15    🔁 4    💬 1    📌 1

Some more random conversation topics:
- what we should do to improve/replace these huge conferences
- replica method and other statphys-inspired high-dim probability (finally trying to understand what the fuss is about)
- textbooks that have been foundational/transformative for your work

09.12.2024 01:02 – 👍 2    🔁 0    💬 0    📌 0

I'll be at @neuripsconf.bsky.social from Tuesday to Sunday!

Feel free to reach out (Whova, email, DM) if you want to chat about scientific/theoretical understanding of deep learning, diffusion models, or more! (see below)

And check out our Sci4DL workshop on Sunday: scienceofdlworkshop.github.io

09.12.2024 01:02 – 👍 4    🔁 0    💬 1    📌 0
