
Paul Hagemann

@yungbayesian.bsky.social

PhD student at TU Berlin, working on generative models and inverse problems. he/him

833 Followers  |  426 Following  |  26 Posts  |  Joined: 16.11.2024

Latest posts by yungbayesian.bsky.social on Bluesky

We are looking for someone to join the group as a postdoc to help us with scaling implicit transfer operators. If you are interested, please reach out to me by email, including a CV with publications and a brief motivational statement. RTs appreciated!

27.05.2025 13:23 · 👍 14  🔁 8  💬 1  📌 2

what is so misunderstood about (3)?

26.05.2025 17:28 · 👍 0  🔁 0  💬 2  📌 0

2025 CHAIR Structured Learning Workshop -- Apply to attend: ui.ungpd.com/Events/60bfc...

06.05.2025 10:02 · 👍 9  🔁 3  💬 0  📌 0

cool work

25.04.2025 14:15 · 👍 0  🔁 0  💬 0  📌 0

best of luck marvin :)

19.03.2025 14:35 · 👍 2  🔁 0  💬 1  📌 0

it is also logical that e.g. Greenpeace/foodwatch are closer to the Greens, since they are the ones covering those topics. neutrality there would be rather ridiculous

27.02.2025 13:51 · 👍 0  🔁 0  💬 0  📌 0

what is wrong with the study?

18.02.2025 23:23 · 👍 0  🔁 0  💬 0  📌 0
Score-Based Generative Models Detect Manifolds

interesting point, but i would say (true) memorization is mathematically impossible. the underlying question is what generalization means when we are given finite training samples. it depends on the model and how long you train, see proceedings.neurips.cc/paper_files/... and arxiv.org/abs/2412.20292

18.02.2025 10:56 · 👍 0  🔁 0  💬 0  📌 0

yes i agree, but for diffusion such a constant velocity/score field does not even exist

07.02.2025 13:04 · 👍 1  🔁 0  💬 1  📌 0

so in diffusion models the noise schedule is such that we cannot have straight-path velocity fields (i.e., v_t(x_t) constant in time), as opposed to flow matching/rectified flows, where it is possible to obtain such paths (although it requires either OT couplings or rectifying...)

06.02.2025 20:11 · 👍 0  🔁 0  💬 1  📌 0
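In formulas (standard conventions, with x_1 the data point and x_0, eps the noise; the schedules alpha_t, sigma_t are the usual diffusion assumptions, not quoted from the post):

    % flow matching / rectified flow: linear path, constant conditional velocity
    x_t = (1-t)\,x_0 + t\,x_1, \qquad \dot{x}_t = x_1 - x_0 \quad \text{(constant in } t\text{)}
    % diffusion, e.g. variance preserving with \alpha_t^2 + \sigma_t^2 = 1:
    x_t = \alpha_t\,x_1 + \sigma_t\,\varepsilon, \qquad \dot{x}_t = \dot{\alpha}_t\,x_1 + \dot{\sigma}_t\,\varepsilon
    % which cannot be constant in t, since \alpha_t, \sigma_t are not affine in t.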

yes lol thank you!

23.01.2025 12:08 · 👍 0  🔁 0  💬 0  📌 0

Check out our GitHub and give it a try yourself! Lots of potential in extending stuff like this to other domains (medical imaging, protein/bio stuff)!

github.com/annegnx/PnP-...

Credit also goes to my awesome collaborators Anne Gagneux, Ségolène Martin, and Gabriele Steidl!

23.01.2025 11:05 · 👍 0  🔁 0  💬 1  📌 0

Compared to diffusion methods, we can handle arbitrary latent distributions and also get (theoretically) straighter paths! We evaluate on multiple image datasets against flow matching, diffusion, and standard PnP-based restoration methods!

23.01.2025 11:00 · 👍 0  🔁 0  💬 1  📌 0

Our algorithm proceeds as follows: we do a gradient step on the data fidelity, reproject onto the flow matching path and then denoise using our flow matching model. This is super cheap to do!

23.01.2025 10:58 · 👍 0  🔁 0  💬 1  📌 0
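A minimal Python sketch of that loop; v_theta, grad_f, the uniform time grid, and the step size are illustrative assumptions, not the official implementation (see the GitHub link above for that):

    import torch

    def pnp_flow_restore(y, grad_f, v_theta, n_steps=100, step_size=1.0):
        # Hypothetical sketch of the loop described above -- not the official code.
        # y       : degraded observation (tensor)
        # grad_f  : x -> gradient of the data-fidelity term f(x; y)
        # v_theta : (x, t) -> pretrained flow-matching velocity field
        x = torch.randn_like(y)                      # start from latent noise (t = 0)
        for k in range(n_steps):
            t = k / n_steps
            # 1) gradient step on the data fidelity
            x = x - step_size * grad_f(x)
            # 2) reproject onto the flow-matching path: interpolate with fresh noise
            z = t * x + (1.0 - t) * torch.randn_like(x)
            # 3) denoise: the velocity field rewritten as an MMSE predictor of x_1
            x = z + (1.0 - t) * v_theta(z, t)
        return x

Each iteration costs only one velocity-network evaluation, which is why the loop is cheap compared to simulating the full ODE per step.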

Therefore, we use the plug-and-play framework and rewrite our velocity field (which predicts a direction) so that it instead denoises the image x_t (i.e., predicts the MMSE image x_1). We then obtain a time-conditional PnP version, where we do the forward-backward PnP step at the current time and reproject.

23.01.2025 10:57 · 👍 0  🔁 0  💬 1  📌 0
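The rewrite in formulas, assuming the linear path convention (a standard identity, not copied from the paper):

    x_t = (1-t)\,x_0 + t\,x_1, \qquad v_t(x_t) = \mathbb{E}[\,x_1 - x_0 \mid x_t\,]
    \;\Longrightarrow\;
    \mathbb{E}[\,x_1 \mid x_t\,] = x_t + (1-t)\,v_t(x_t)

so a single forward pass of the velocity network already yields the MMSE denoiser.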

Our paper "PnP-Flow: Plug-and-Play Image Restoration with Flow Matching" has been accepted to ICLR 2025. Here is a short explainer: we want to restore images (i.e., solve inverse problems) using pretrained velocity fields from flow matching. However, using the change of variables is super costly.

23.01.2025 10:53 · 👍 15  🔁 5  💬 1  📌 2

very nice paper, only had a quick glimpse, but another aspect is that the optimal score estimator explodes as t -> 0, which NNs ofc cannot replicate. how does this influence the results?

01.01.2025 15:51 · 👍 1  🔁 0  💬 1  📌 0
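A one-point toy illustration of that blow-up, assuming the convention that sigma_t -> 0 as t -> 0: if the training set were a single point x_0, the smoothed marginal is p_t = N(x_0, sigma_t^2 I), and

    \nabla_x \log p_t(x) = -\frac{x - x_0}{\sigma_t^2}
    \;\longrightarrow\; \infty \quad \text{as } \sigma_t \to 0, \; x \neq x_0

so the optimal score of the (smoothed) empirical measure diverges near t = 0, which a network with bounded outputs cannot match.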

you might be onto sth haha

28.11.2024 10:26 · 👍 1  🔁 0  💬 0  📌 0

i guess the adam paper's citation count is a pretty good indicator of how many ml papers are being published. looks like we have been saturating since 2021

28.11.2024 10:16 · 👍 2  🔁 0  💬 1  📌 0

same experience here. i am not sure we need actual conference reviewing at all. why don't we all publish on openreview, and if i use/build upon/read your paper, i can write my opinion on it? without the accept/reject stamp.

24.11.2024 13:14 · 👍 0  🔁 0  💬 0  📌 0

Here one can see FID results for different beta! Indeed it seems to be fruitful to restrict mass movement in Y for class-conditional CIFAR! We also apply this to other interesting inverse problems; the article can be found at arxiv.org/abs/2403.18705

20.11.2024 09:14 · 👍 1  🔁 0  💬 0  📌 0

We want to approximate this distance with standard OT solvers, and therefore introduce a twisted cost function. With this at hand, we can now do OT flow matching for inverse problems! The factor beta controls how much mass leakage we allow in Y.

20.11.2024 09:12 · 👍 2  🔁 0  💬 1  📌 0
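A toy sketch of such a twisted cost with the POT library (pip install pot); the one-dimensional (y, x) split and all names here are illustrative assumptions, not the paper's code:

    import numpy as np
    import ot  # POT: Python Optimal Transport

    rng = np.random.default_rng(0)
    # toy samples: each row is (y, x), y the observation part, x the target part
    P = rng.normal(size=(50, 2))
    Q = rng.normal(size=(50, 2))

    beta = 100.0  # large beta -> little mass movement ("leakage") in Y

    # twisted cost: beta * |y - y'|^2 + |x - x'|^2
    C_y = (P[:, :1] - Q[:, :1].T) ** 2
    C_x = (P[:, 1:] - Q[:, 1:].T) ** 2
    M = beta * C_y + C_x

    a = np.full(50, 1.0 / 50)  # uniform marginals
    b = np.full(50, 1.0 / 50)
    plan = ot.emd(a, b, M)     # optimal plan under the twisted cost

As beta grows, the plan increasingly matches points with (nearly) equal y, approximating the restricted couplings from the previous post.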

This object has already attracted some interest, e.g., it pops up in the theory of gradient flows. It generalizes the KL property quite nicely and unifies some ideas present in conditional generative modelling. For instance, its dual is the loss usually used in conditional Wasserstein GANs.

20.11.2024 09:10 · 👍 0  🔁 0  💬 1  📌 0

Now does the same hold for the Wasserstein distance? Unfortunately not, since moving mass in the Y-direction can be more efficient for some measures. However, we can fix that if we restrict the suitable couplings to ones that do not move mass in the Y-direction.

20.11.2024 09:08 · 👍 0  🔁 0  💬 1  📌 0

In a somewhat recent paper we introduced conditional Wasserstein distances. They generalize a property that basically explains why KL works well for generative modelling: the chain rule of KL!
It says that if one wants to approximate the posterior, one can instead minimize the KL between the joints.

20.11.2024 09:07 · 👍 15  🔁 0  💬 1  📌 0
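The chain rule in question, written out (a standard identity):

    \mathrm{KL}(P_{Y,X} \,\|\, Q_{Y,X})
      = \mathrm{KL}(P_Y \,\|\, Q_Y)
      + \mathbb{E}_{y \sim P_Y}\big[\mathrm{KL}(P_{X \mid Y=y} \,\|\, Q_{X \mid Y=y})\big]

so when P_Y = Q_Y (both joints use the same observations), minimizing the joint KL minimizes the expected KL between the posteriors.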

I created a starter pack for simulation-based inference (a.k.a. likelihood-free inference).

Let me know if you’d like me to add you.

go.bsky.app/GVnJRoK

17.11.2024 15:14 · 👍 42  🔁 18  💬 16  📌 2

would love to be added :)

19.11.2024 00:44 · 👍 0  🔁 0  💬 1  📌 0

look at my handle haha

19.11.2024 00:43 · 👍 1  🔁 0  💬 0  📌 0

feel the ai

17.11.2024 19:05 · 👍 2  🔁 0  💬 0  📌 0
