
Luca Eyring

@lucaeyring.bsky.social

ELLIS PhD student at TU Munich & Helmholtz AI | Generative Modeling - Optimal Transport - Representation Learning | https://lucaeyring.com/

352 Followers  |  215 Following  |  5 Posts  |  Joined: 18.11.2024

Latest posts by lucaeyring.bsky.social on Bluesky


3/
Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models
@lucaeyring.bsky.social, @shyamgopal.bsky.social, Alexey Dosovitskiy, @natanielruiz.bsky.social, @zeynepakata.bsky.social
[Paper]: arxiv.org/abs/2508.09968
[Code]: github.com/ExplainableM...

13.10.2025 14:43 – 👍 2    🔁 1    💬 2    📌 0

🎓 PhD Spotlight: Karsten Roth

Celebrate @confusezius.bsky.social, who defended his PhD on June 24th summa cum laude!

🏁 His next stop: Google DeepMind in Zurich!

Join us in celebrating Karsten's achievements and wishing him the best for his future endeavors! 🥳

04.08.2025 14:11 – 👍 9    🔁 2    💬 1    📌 1

From cell lines to full embryos, drug treatments to genetic perturbations, neuron engineering to virtual organoid screens – odds are there's something in it for you!

Built on flow matching, CellFlow can help guide your next phenotypic screen: biorxiv.org/content/10.1101/2025.04.11.648220v1

23.04.2025 09:26 – 👍 17    🔁 7    💬 1    📌 1
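
To give a feel for the flow-matching backbone mentioned above: a training step boils down to regressing a network onto the velocity of a straight-line path between noise and data. The sketch below is a generic conditional-flow-matching step in PyTorch with a hypothetical velocity_net; it is not CellFlow's actual code, which adds perturbation conditioning (see the preprint).

```python
# Generic flow-matching training step (a sketch, not CellFlow's implementation).
# `velocity_net` is a hypothetical model mapping (x_t, t) -> predicted velocity.
import torch

def flow_matching_loss(velocity_net, x1):
    """x1: a batch of data samples, shape (batch, dim)."""
    x0 = torch.randn_like(x1)                          # noise endpoint
    t = torch.rand(x1.shape[0], 1, device=x1.device)   # time in [0, 1]
    xt = (1 - t) * x0 + t * x1                         # straight-line interpolant
    target_v = x1 - x0                                 # its constant velocity
    pred_v = velocity_net(xt, t)
    return torch.mean((pred_v - target_v) ** 2)        # regress onto the velocity
```

At inference, new samples are drawn by integrating the learned velocity field from noise to data, e.g. with a simple Euler loop.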

(4/4) Disentangled Representation Learning with the Gromov-Monge Gap
@lucaeyring.bsky.social will present GMG, a novel regularizer that matches prior distributions with minimal geometric distortion.
πŸ“ Hall 3 + Hall 2B #603
πŸ•˜ Sat Apr 26, 10:00 a.m.–12:30β€―p.m.

22.04.2025 13:52 – 👍 4    🔁 1    💬 0    📌 0
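
For readers curious what "minimal geometric distortion" means in practice: below is a much-simplified, hypothetical sketch of a distortion penalty that compares pairwise distances before and after encoding. The actual Gromov-Monge Gap is defined via optimal transport and subtracts the minimal achievable distortion, so treat this only as intuition, not as the paper's objective.

```python
# Simplified geometric-distortion penalty (illustration only, not the GMG).
# Penalizes how much an encoder changes pairwise distances within a batch.
import torch

def pairwise_distortion(x, z):
    """x: inputs (batch, d_x); z: their latent codes (batch, d_z)."""
    dx = torch.cdist(x, x)               # pairwise distances in data space
    dz = torch.cdist(z, z)               # pairwise distances in latent space
    return torch.mean((dx - dz) ** 2)

# Used as a regularizer, e.g.: loss = task_loss + lam * pairwise_distortion(x, encoder(x))
```
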
Preview
Disentangled Representation Learning with the Gromov-Monge Gap: Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. ...

(3/4) Disentangled Representation Learning with the Gromov-Monge Gap
A fantastic work contributed by Theo Uscidda and @lucaeyring.bsky.social, with @confusezius.bsky.social, @fabiantheis.bsky.social, @zeynepakata.bsky.social, and Marco Cuturi.
📖 [Paper]: arxiv.org/abs/2407.07829

07.04.2025 09:34 – 👍 5    🔁 1    💬 1    📌 0

Happy to share that we have 4 papers to be presented at the upcoming #ICLR2025 in the beautiful city of #Singapore. Check out our website for more details: eml-munich.de/publications. We will introduce the talented authors and their papers very soon, stay tuned 😉

19.03.2025 11:54 – 👍 7    🔁 4    💬 0    📌 0

Thrilled to announce that four papers from our group have been accepted to #CVPR2025 in Nashville! 🎉 Congrats to all authors & collaborators.
Our work spans multimodal pre-training, model merging, and more.
📄 Papers & code: eml-munich.de/publications
See threads for highlights in each paper.
#CVPR

02.04.2025 11:36 – 👍 11    🔁 4    💬 1    📌 0

📄 Disentangled Representation Learning with the Gromov-Monge Gap

with Théo Uscidda, Luca Eyring, @confusezius.bsky.social, Fabian J Theis, Marco Cuturi

📄 Decoupling Angles and Strength in Low-rank Adaptation

with Massimo Bini, Leander Girrbach

24.01.2025 20:02 – 👍 10    🔁 2    💬 0    📌 0

Missing the deep learning part? Go check out the follow-up work at @neuripsconf.bsky.social (tinyurl.com/yvf72kzf) and @iclr-conf.bsky.social (tinyurl.com/4vh8vuzk)

23.01.2025 08:45 – 👍 11    🔁 3    💬 0    📌 0
Preview
Mapping cells through time and space with moscot - Nature: Moscot is an optimal transport approach that overcomes current limitations of similar methods to enable multimodal, scalable and consistent single-cell analyses of datasets across spatial and temporal...

Good to see moscot-tools.org published in @nature.com! We made existing Optimal Transport (OT) applications in single-cell genomics scalable and multimodal, added a novel spatiotemporal trajectory inference method and found exciting new biology in the pancreas! tinyurl.com/33zuwsep

23.01.2025 08:41 – 👍 49    🔁 13    💬 1    📌 3

Today is a great day for optimal transport 🎉! Lots of gratitude 🙏 for all folks who contributed to ott-jax.readthedocs.io and pushed for the MOSCOT (now @ nature!) paper, from visionaries @dominik1klein.bsky.social, G. Palla, Z. Piran to the magician, Michal Klein! ❤️

www.nature.com/articles/s41...

22.01.2025 22:17 – 👍 22    🔁 7    💬 0    📌 1
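
For context, the core primitive behind ott-jax and moscot is entropic optimal transport, usually solved with Sinkhorn iterations. The toy NumPy sketch below is illustrative only: ott-jax implements this (and much more) in JAX with far better scaling and numerics, and moscot builds the single-cell workflows on top of it.

```python
# Toy entropic optimal transport via Sinkhorn iterations (illustration only).
import numpy as np

def sinkhorn(cost, a, b, epsilon=0.05, n_iters=500):
    """cost: (n, m) cost matrix; a, b: source/target marginals summing to 1."""
    K = np.exp(-cost / epsilon)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                  # match the target marginal
        u = a / (K @ v)                    # match the source marginal
    return u[:, None] * K * v[None, :]     # transport plan with marginals (a, b)

# Example: couple two small 1-D point clouds.
x, y = np.linspace(0, 1, 5), np.linspace(0, 1, 7)
cost = (x[:, None] - y[None, :]) ** 2
plan = sinkhorn(cost, np.full(5, 1 / 5), np.full(7, 1 / 7))
```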

This is maybe my favorite thing I've seen out of #NeurIPS2024.

Head over to HuggingFace and play with this thing. It's quite extraordinary.

14.12.2024 19:32 – 👍 3    🔁 2    💬 0    📌 0

ReNO shows that some initial noises are better for some prompts! This is great for improving image generation, but I think it also points to a deeper property of diffusion models.

12.12.2024 11:23 – 👍 2    🔁 2    💬 1    📌 0
Preview
GitHub - ExplainableML/ReNO: [NeurIPS 2024] ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization

This is joint work with @shyamgopal.bsky.social (co-lead), @confusezius.bsky.social, Alexey, and @zeynepakata.bsky.social.

To dive into all the details, please check out:

Code: github.com/ExplainableM...
Paper (updated with latest FLUX-Schnell + ReNO results): arxiv.org/abs/2406.043...

11.12.2024 23:05 – 👍 5    🔁 0    💬 0    📌 0

Even within the same computational budget, a ReNO-optimized one-step model outperforms popular multi-step models such as SDXL and PixArt-α. Additionally, our strongest model, ReNO-enhanced HyperSDXL, is on par even with SOTA proprietary models, achieving a win rate of 54% vs SD3.

11.12.2024 23:05 – 👍 3    🔁 0    💬 1    📌 0

ReNO optimizes the initial noise of one-step T2I models at inference time using human preference reward models. We show that ReNO achieves significant improvements across five different one-step models, both quantitatively on common benchmarks and in comprehensive user studies.

11.12.2024 23:05 – 👍 3    🔁 0    💬 1    📌 0
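
The core loop behind reward-based noise optimization can be sketched in a few lines: treat the initial noise of a one-step generator as the optimization variable and ascend a differentiable reward. The names generate and reward_model below are hypothetical stand-ins, and the hyperparameters are illustrative; ReNO's actual implementation (multiple reward models, noise regularization, tuned optimizers) is in the GitHub repo linked above.

```python
# Sketch of reward-based noise optimization (not ReNO's actual code).
# `generate(noise, prompt)` is a hypothetical one-step T2I model,
# `reward_model(image, prompt)` a hypothetical differentiable reward.
import torch

def optimize_noise(generate, reward_model, prompt, shape, steps=50, lr=0.1):
    noise = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        image = generate(noise, prompt)       # one forward pass of the generator
        loss = -reward_model(image, prompt)   # maximize the reward
        opt.zero_grad()
        loss.backward()                       # backprop through the generator
        opt.step()
    return noise.detach()                     # optimized initial noise for this prompt
```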

Thanks to @fffiloni.bsky.social and @natanielruiz.bsky.social, we have a live demo of ReNO up and running. Play around with it here:

🤗: huggingface.co/spaces/fffil...

We are excited to present ReNO at #NeurIPS2024 this week!
Join us tomorrow from 11am-2pm at East Exhibit Hall A-C #1504!

11.12.2024 23:05 – 👍 4    🔁 0    💬 1    📌 1

Can we enhance the performance of T2I models without any fine-tuning?

We show that with our ReNO, Reward-based Noise Optimization, one-step models consistently surpass the performance of all current open-source Text-to-Image models within a computational budget of 20-50 seconds!
#NeurIPS2024

11.12.2024 23:05 – 👍 27    🔁 7    💬 1    📌 1

After a break of over two years, I'm attending a conference again! Excited to attend NeurIPS, and even more so to be presenting ReNO, which gets inference-time scaling and preference optimization to work for text-to-image generation.
Do reach out if you'd like to chat!

09.12.2024 21:27 – 👍 12    🔁 3    💬 0    📌 0
