Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models
@lucaeyring.bsky.social , @shyamgopal.bsky.social , Alexey Dosovitskiy, @natanielruiz.bsky.social , @zeynepakata.bsky.social
[Paper]: arxiv.org/abs/2508.09968
[Code]: github.com/ExplainableM...
13.10.2025 14:43
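The title above suggests amortizing per-prompt test-time noise optimization into a learned module. A heavily hedged toy sketch of that general idea, not the paper's actual hypernetwork (every name and design choice below is a hypothetical stand-in): a "teacher" performs one step of reward ascent on the noise, and a "student" linear map is fit by least squares to predict the teacher's output directly, so test time needs a single forward pass instead of an optimization loop.

```python
import numpy as np

# Hypothetical linear stand-ins for illustration only.
rng = np.random.default_rng(1)
dim, n = 6, 500
W = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))  # stand-in generator

def teacher_step(z, p, lr=0.3):
    # one gradient step toward the prompt embedding p acting as "target"
    return z + lr * (W.T @ (p - W @ z))

Z = rng.standard_normal((n, dim))   # raw noise samples
P = rng.standard_normal((n, dim))   # stand-in prompt embeddings
Z_opt = np.stack([teacher_step(z, p) for z, p in zip(Z, P)])

# fit the student amortizer: [noise, prompt] -> optimized noise
X = np.hstack([Z, P])
A, *_ = np.linalg.lstsq(X, Z_opt, rcond=None)

# "test time": a single matrix multiply replaces the optimization step
z_new, p_new = rng.standard_normal(dim), rng.standard_normal(dim)
pred = np.concatenate([z_new, p_new]) @ A
err = float(np.linalg.norm(pred - teacher_step(z_new, p_new)))
```

Because the toy teacher is linear in its inputs, the least-squares student imitates it essentially exactly; the real setting needs a nonlinear network, but the amortization logic is the same.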
PhD Spotlight: Karsten Roth
Celebrate @confusezius.bsky.social, who defended his PhD on June 24th summa cum laude!
His next stop: Google DeepMind in Zurich!
Join us in celebrating Karsten's achievements and wishing him the best for his future endeavors!
04.08.2025 14:11
From cell lines to full embryos, drug treatments to genetic perturbations, neuron engineering to virtual organoid screens: odds are there's something in it for you!
Built on flow matching, CellFlow can help guide your next phenotypic screen: biorxiv.org/content/10.1101/2025.04.11.648220v1
23.04.2025 09:26
(4/4) Disentangled Representation Learning with the Gromov-Monge Gap
@lucaeyring.bsky.social will present GMG, a novel regularizer that matches prior distributions with minimal geometric distortion.
Hall 3 + Hall 2B #603
Sat Apr 26, 10:00 a.m.–12:30 p.m.
22.04.2025 13:52
Disentangled Representation Learning with the Gromov-Monge Gap
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. ...
(3/4) Disentangled Representation Learning with the Gromov-Monge Gap
A fantastic work contributed by Theo Uscidda and @lucaeyring.bsky.social , with @confusezius.bsky.social , @fabiantheis.bsky.social , @zeynepakata.bsky.social , and Marco Cuturi.
[Paper]: arxiv.org/abs/2407.07829
07.04.2025 09:34
Happy to share that we have four papers to be presented at the upcoming #ICLR2025 in the beautiful city of #Singapore. Check out our website for more details: eml-munich.de/publications. We will introduce the talented authors and their papers very soon, stay tuned!
19.03.2025 11:54
Thrilled to announce that four papers from our group have been accepted to #CVPR2025 in Nashville! Congrats to all authors & collaborators.
Our work spans multimodal pre-training, model merging, and more.
Papers & code: eml-munich.de/publications
See the threads for highlights on each paper.
#CVPR
02.04.2025 11:36
Disentangled Representation Learning with the Gromov-Monge Gap
with Théo Uscidda, Luca Eyring, @confusezius.bsky.social, Fabian J. Theis, Marco Cuturi
Decoupling Angles and Strength in Low-rank Adaptation
with Massimo Bini, Leander Girrbach
24.01.2025 20:02
Missing the deep learning part? Go check out the follow-up work at @neuripsconf.bsky.social (tinyurl.com/yvf72kzf) and @iclr-conf.bsky.social (tinyurl.com/4vh8vuzk)
23.01.2025 08:45
Mapping cells through time and space with moscot - Nature
Moscot is an optimal transport approach that overcomes current limitations of similar methods to enable multimodal, scalable and consistent single-cell analyses of datasets across spatial and temporal...
Good to see moscot-tools.org published in @nature.com ! We made existing Optimal Transport (OT) applications in single-cell genomics scalable and multimodal, added a novel spatiotemporal trajectory inference method and found exciting new biology in the pancreas! tinyurl.com/33zuwsep
23.01.2025 08:41
Today is a great day for optimal transport! Lots of gratitude for all the folks who contributed to ott-jax.readthedocs.io and pushed for the MOSCOT (now in Nature!) paper, from visionaries @dominik1klein.bsky.social, G. Palla, and Z. Piran to the magician, Michal Klein!
www.nature.com/articles/s41...
22.01.2025 22:17
This is maybe my favorite thing I've seen out of #NeurIPS2024.
Head over to HuggingFace and play with this thing. It's quite extraordinary.
14.12.2024 19:32
ReNO shows that some initial noise vectors are better suited to some prompts! This is great for improving image generation, but I think it also reveals a deeper property of diffusion models.
12.12.2024 11:23
GitHub - ExplainableML/ReNO: [NeurIPS 2024] ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization
This is joint work with @shyamgopal.bsky.social (co-lead), @confusezius.bsky.social, Alexey, and @zeynepakata.bsky.social.
To dive into all the details, please check out:
Code: github.com/ExplainableM...
Paper (updated with latest FLUX-Schnell + ReNO results): arxiv.org/abs/2406.043...
11.12.2024 23:05
Even within the same computational budget, a ReNO-optimized one-step model outperforms popular multi-step models such as SDXL and PixArt-α. Additionally, our strongest model, ReNO-enhanced HyperSDXL, is even on par with SOTA proprietary models, achieving a 54% win rate vs. SD3.
11.12.2024 23:05
ReNO optimizes the initial noise of one-step T2I models at inference time using human-preference reward models. We show that ReNO achieves significant improvements over five different one-step models, both quantitatively on common benchmarks and in comprehensive user studies.
11.12.2024 23:05
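The mechanism described above can be sketched in a few lines. This is a toy stand-in, not the authors' implementation: here the "generator" is a fixed linear map and the "reward" a negative squared distance to a target, chosen so the gradient is available in closed form, whereas the real method backpropagates a human-preference reward through a one-step text-to-image model to update the initial noise.

```python
import numpy as np

# Hypothetical stand-ins for illustration only.
rng = np.random.default_rng(0)
dim = 8
W = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))  # stand-in one-step generator
target = rng.standard_normal(dim)                        # stand-in "preferred image"

def generate(z):
    return W @ z

def reward(x):
    return -np.sum((x - target) ** 2)

z0 = rng.standard_normal(dim)  # initial noise
z, lr = z0.copy(), 0.1
for _ in range(500):
    grad_x = -2.0 * (generate(z) - target)  # d reward / d image
    grad_z = W.T @ grad_x                   # chain rule through the generator
    z += lr * grad_z                        # gradient ascent on the noise

# the optimized noise now yields a higher-reward output than z0 did
```

The key point the toy shares with ReNO: the model weights never change; only the noise is treated as the optimization variable.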
Thanks to @fffiloni.bsky.social and @natanielruiz.bsky.social, we have a running live demo of ReNO; play around with it here:
huggingface.co/spaces/fffil...
We are excited to present ReNO at #NeurIPS2024 this week!
Join us tomorrow from 11am-2pm at East Exhibit Hall A-C #1504!
11.12.2024 23:05
Can we enhance the performance of T2I models without any fine-tuning?
We show that with our ReNO, Reward-based Noise Optimization, one-step models consistently surpass the performance of all current open-source text-to-image models within a computational budget of 20–50 seconds!
#NeurIPS2024
11.12.2024 23:05
After a break of over 2 years, I'm attending a conference again! Excited to attend NeurIPS, even more so to be presenting ReNO, getting inference-time scaling and preference optimization to work for text-to-image generation.
Do reach out if you'd like to chat!
09.12.2024 21:27