Happy to introduce 🔥LaM-SLidE🔥!
We show how trajectories of spatial dynamical systems can be modeled in latent space by
--> leveraging IDENTIFIERS.
Paper: arxiv.org/abs/2502.12128
Code: github.com/ml-jku/LaM-S...
Blog: ml-jku.github.io/LaM-SLidE/
1/n
22.05.2025 12:24 · 7 likes · 8 reposts · 1 reply · 1 quote
11/11 This is joint work with @willberghammer, @haoyu_wang66, @EnnemoserMartin, @HochreiterSepp, and @sebaleh. See you at #ICLR!
[Poster Link](iclr.cc/virtual/202...)
[Paper Link](arxiv.org/abs/2502.08696)
---
24.04.2025 08:57 · 0 likes · 0 reposts · 0 replies · 0 quotes
10/11 Our method outperforms autoregressive approaches on Ising model benchmarks and opens new avenues for applying diffusion models to a wide range of scientific applications in discrete domains.
24.04.2025 08:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
9/11 Due to the mass-covering property of the fKL, it excels at unbiased sampling. Conversely, the rKL is mode-seeking, making it ideal for combinatorial optimization (CO), as it achieves better solution quality with fewer samples.
24.04.2025 08:57 · 1 like · 0 reposts · 1 reply · 0 quotes
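The mass-covering vs. mode-seeking contrast above can be reproduced in a few lines of numpy. This is an illustration only, not the paper's setup: we fit a single Gaussian to a toy bimodal target by grid search under each divergence; the target, grids, and thresholds here are all invented for the demo.

```python
import numpy as np

xs = np.linspace(-8, 8, 2001)
dx = xs[1] - xs[0]

def normal(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Toy bimodal target: equal-weight Gaussians at -3 and +3.
target = 0.5 * normal(xs, -3, 0.5) + 0.5 * normal(xs, 3, 0.5)

def kl(p, q):
    # KL(p || q) on the grid; skip points where p is numerically zero.
    mask = p > 1e-12
    return np.sum(p[mask] * np.log(p[mask] / (q[mask] + 1e-300))) * dx

best_f = best_r = None
for mu in np.linspace(-4, 4, 41):
    for sigma in np.linspace(0.3, 5.0, 48):
        q = normal(xs, mu, sigma)
        f = kl(target, q)  # forward KL: expectation under the target
        r = kl(q, target)  # reverse KL: expectation under the model
        if best_f is None or f < best_f[0]:
            best_f = (f, mu, sigma)
        if best_r is None or r < best_r[0]:
            best_r = (r, mu, sigma)

print("fKL optimum (mu, sigma):", best_f[1:])  # mass-covering: wide, centered
print("rKL optimum (mu, sigma):", best_r[1:])  # mode-seeking: narrow, one mode
```

The fKL optimum spreads mass over both modes (a wide Gaussian near the midpoint), while the rKL optimum collapses onto a single mode, which is exactly the trade-off described in the post.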
8/11 💡 Solution 2: We address the limitations of the fKL by combining it with Neural Importance Sampling over samples from the diffusion sampler. This allows us to estimate the gradient of the fKL using Monte Carlo integration, making training more memory-efficient.
24.04.2025 08:57 · 2 likes · 0 reposts · 1 reply · 0 quotes
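A minimal sketch of the idea behind Solution 2, assuming nothing about the paper's actual implementation: the fKL gradient ∇θ KL(p || qθ) = E_{x~p}[−∇θ log qθ(x)] can be estimated from the model's own samples via self-normalized importance sampling with weights proportional to p̃(x)/qθ(x), where p̃ is the unnormalized target. The categorical model and target below are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
p_tilde = np.array([8.0, 4.0, 2.0, 1.0])  # unnormalized target
theta = np.zeros(K)                       # model logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fkl_grad_snis(theta, n=50_000):
    q = softmax(theta)
    x = rng.choice(K, size=n, p=q)        # samples from the model itself
    w = p_tilde[x] / q[x]
    w = w / w.sum()                       # self-normalized importance weights
    onehot = np.eye(K)[x]
    # For a softmax model, grad_theta log q(x) = onehot(x) - q.
    return -(w[:, None] * (onehot - q[None, :])).sum(axis=0)

# A few steps of gradient descent drive q_theta toward the normalized target,
# without ever needing exact samples from the target distribution.
for _ in range(500):
    theta -= 0.5 * fkl_grad_snis(theta)

print(np.round(softmax(theta), 3))        # close to p_tilde / p_tilde.sum()
```

The point of the trick is visible in the code: only evaluations of the unnormalized target p̃ are needed, never samples from it.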
7/11 An alternative is the forward KL divergence (fKL), for which it is well known how to improve memory efficiency by leveraging Monte Carlo integration over diffusion time steps. However, the fKL requires samples from the target distribution!
24.04.2025 08:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
6/11 💡 Solution 1: We apply the policy gradient theorem to the rKL between the joint distributions of the diffusion path. This enables the use of mini-batches over diffusion time steps by leveraging reinforcement learning methods, allowing for memory-efficient training.
24.04.2025 08:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
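The policy-gradient trick behind Solution 1 can be sketched on a toy example. This is the generic REINFORCE identity, not the paper's exact estimator: ∇θ E_{x~qθ}[f(x)] = E_{x~qθ}[f(x) ∇θ log qθ(x)], which needs no backpropagation through the sampling procedure and, applied per time step of a diffusion path, admits mini-batching. The distribution and "reward" below are invented; we check the estimator against the exact gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3
theta = np.array([0.2, -0.5, 0.1])  # logits of a toy categorical "policy"
f = np.array([1.0, 3.0, -2.0])      # toy per-outcome objective

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

q = softmax(theta)

# Exact gradient of E_q[f] w.r.t. the logits: q_i * (f_i - E_q[f]).
exact = q * (f - q @ f)

# REINFORCE estimate: only samples and the score function are needed,
# no differentiation through the sampling step.
x = rng.choice(K, size=200_000, p=q)
score = np.eye(K)[x] - q[None, :]   # grad_theta log q(x) for a softmax
estimate = (f[x][:, None] * score).mean(axis=0)

print(exact)
print(estimate)                     # agrees with `exact` up to MC noise
```

Because the gradient becomes an expectation over samples, it can be estimated from mini-batches, which is what makes the memory cost independent of the number of diffusion steps.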
5/11 A commonly used divergence is the reverse KL divergence (rKL), as its expectation is taken over samples from the generative model. However, naive optimization of this KL divergence requires backpropagating through the whole generative process.
24.04.2025 08:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
4/11 🚨 Challenge: However, existing diffusion samplers struggle with memory scaling, limiting the number of attainable diffusion steps due to backpropagation through the entire generative process.
24.04.2025 08:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
3/11 Diffusion Samplers aim to sample from an unnormalized target distribution without access to samples from this distribution. They can be trained by minimizing a divergence between the joint distribution of the forward and reverse diffusion paths.
24.04.2025 08:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
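What "a divergence between the joint distribution of the forward and reverse diffusion paths" means can be illustrated on a toy two-state chain that is small enough to enumerate every path. Real diffusion samplers estimate this quantity from sampled paths instead; the transition matrices below are arbitrary demo values.

```python
import itertools
import numpy as np

T = 4  # number of diffusion steps in the toy chain
# Transition matrices over binary states (rows: current, cols: next).
fwd = np.array([[0.9, 0.1], [0.2, 0.8]])  # "forward"/reference process
rev = np.array([[0.7, 0.3], [0.4, 0.6]])  # learned "reverse" process
init = np.array([0.5, 0.5])

def path_prob(path, P):
    # Probability of a full state trajectory under a Markov chain.
    p = init[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a, b]
    return p

# KL between the two joint path distributions, by brute-force enumeration.
kl = 0.0
for path in itertools.product([0, 1], repeat=T + 1):
    q = path_prob(path, rev)
    p = path_prob(path, fwd)
    kl += q * np.log(q / p)

print(f"KL(reverse path || forward path) = {kl:.4f}")  # 0 only if they match
```

Training drives this divergence toward zero, at which point sampling from the reverse process reproduces the forward path distribution.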
2/11 We've developed scalable and memory-efficient training methods for diffusion samplers, achieving state-of-the-art results in combinatorial optimization and unbiased sampling on the Ising model.
24.04.2025 08:57 · 0 likes · 0 reposts · 1 reply · 0 quotes
1/11 Excited to present our latest work "Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics" at #ICLR2025 on Fri 25 Apr at 10 am!
#CombinatorialOptimization #StatisticalPhysics #DiffusionModels
24.04.2025 08:57 · 16 likes · 7 reposts · 1 reply · 0 quotes
Universities and research institutions leave platform X - together for diversity, freedom, and science
A strong signal!!
Over 60 German universities & research institutions announced their departure from X today, see below. #eXit
X, they say, is no longer compatible with their core values: "openness to the world, scientific integrity, transparency, and democratic discourse."
List of participants here:
10.01.2025 09:21 · 3398 likes · 832 reposts · 90 replies · 113 quotes
ML for molecules and materials in the era of LLMs [ML4Molecules]
ELLIS workshop, HYBRID, December 6, 2024
The Machine Learning for Molecules workshop 2024 will take place THIS FRIDAY, December 6.
Tickets for in-person participation are SOLD OUT.
We still have a few free tickets for online/virtual participation!
Registration link here: moleculediscovery.github.io/workshop2024/
03.12.2024 12:35 · 19 likes · 14 reposts · 0 replies · 0 quotes
A pizza steel or pizza stone at max heat (250 °C) should do the job
30.11.2024 08:57 · 0 likes · 0 reposts · 0 replies · 0 quotes
I think it is fine to keep the score, but if all concerns are addressed they should at least justify why they are nevertheless keeping their score.
26.11.2024 13:11 · 1 like · 0 reposts · 0 replies · 0 quotes
Does this mean all papers at 6 or above should be accepted?
25.11.2024 19:02 · 0 likes · 0 reposts · 1 reply · 0 quotes
✍️ Reminder to reviewers: check author responses to your reviews, and ask follow-up questions if needed.
50% of papers have discussion - let's bring this number up!
25.11.2024 12:45 · 38 likes · 8 reposts · 1 reply · 3 quotes
That is a cool idea!
23.11.2024 15:54 · 1 like · 0 reposts · 1 reply · 0 quotes
The ✨ML Internship Feed✨ is here!
@serge.belongie.com and I created this feed to compile internship opportunities in AI, ML, CV, NLP, and related areas.
The feed is rule-based. Please help us improve the rules by sharing feedback 🧡
Link to the feed: bsky.app/profile/did:...
22.11.2024 21:46 · 63 likes · 16 reposts · 7 replies · 1 quote
Atlas - Engagement-Based Social Graph for Bluesky by Jaz (jaz.bsky.social)
Love seeing the Bluesky community grow!
Just look at the stats: daily activity (likes, posts, and follows) is skyrocketing 🚀, with recent peaks such as hitting 3 million daily likes!
Want to explore more about Bluesky's incredible growth? Check out the live stats page here: bsky.jazco.dev/stats
19.11.2024 20:02 · 5 likes · 1 repost · 0 replies · 0 quotes
Max Welling (@wellingmax.bsky.social) landed and needs followers! ;)
18.11.2024 08:10 · 32 likes · 4 reposts · 3 replies · 0 quotes
โ
17.11.2024 20:49 · 1 like · 0 reposts · 1 reply · 0 quotes
I also would like to join :)
16.11.2024 15:42 · 2 likes · 0 reposts · 1 reply · 0 quotes
I'm making a list of AI for Science researchers on Bluesky; let me know if I missed you / if you'd like to join!
go.bsky.app/AcP9Lix
10.11.2024 00:11 · 246 likes · 90 reposts · 160 replies · 5 quotes
Current #Wahlumfragen (election polls) with analysis, trends, and timeline | #Sonntagsfrage | #Wahltrend for the #Bundestagswahl and state elections ➤ https://dawum.de
Asst. Prof., Machine/Deep Learning in Medical Imaging | JKU Linz
PhD student @ Institute for Machine Learning @jkulinz
Research scientist at Anthropic. Prev. Google Brain/DeepMind, founding team OpenAI. Computer scientist; inventor of the VAE, Adam optimizer, and other methods. ML PhD. Website: dpkingma.com
Theoretical Physicist at @MPI_ScienceOfLight. This is the account of my research group. Topics include #machinelearning for (#quantum) #physics […]
Bridged from https://fediscience.org/@FMarquardtGroup; follow @ap.brid.gy to interact
Tenured senior researcher in quantum information at the Freie Universität Berlin (previously: PhD at ETH Zurich & post-doc at IQIM Caltech) https://phfaist.com/
Quantum compilation & open source software.
working on pennylane.ai
I also like to explain things to a technical but non-domain-expert audience
Junior Research Group Leader at Universität Jena
I like exploring the unknown 🔭🔬✨
AI for Science, Quantum Optics (Theory & Experiment)
Quantum information | AI for science | computational physics.
PhD @ Perimeter Institute & IQC.
Trying to build quantum things.
Quantum theory. Account of my research group at the University of Augsburg:
https://www.uni-augsburg.de/en/fakultaet/mntf/physik/groups/theo3/
quantum algorithmics @UniAugsburg
https://www.uni-augsburg.de/de/fakultaet/fai/informatik/prof/qalg/
Posts are not always my own
Theoretical physicist at LMU München
Google Quantum AI | Ex-Team Lead, IBM Quantum | MIT TR35 | Founder, Open Labs | Board, Yale Alumni Assoc | Yale PhD
Scientist, professor of quantum physics at Freie Universität Berlin and affiliated with Helmholtz Center Berlin and the Fraunhofer Heinrich Hertz Institute. ERC Fellow.
ETH Zurich 🇨🇦 🇨🇴
Working towards the safe development of AI for the benefit of all at Université de Montréal, LawZero and Mila.
A.M. Turing Award Recipient and most-cited AI researcher.
https://lawzero.org/en
https://yoshuabengio.org/profile/
Professor for "Machine Learning in Science", University of Tübingen.
Artificial Intelligence as a source of inspiration in science.
https://mariokrenn.wordpress.com/
Deep learner at FAIR. Into codegen, RL, equivariance, generative models. Spent time at Qualcomm, Scyfer (acquired), UvA, DeepMind, OpenAI.