The GIFs didn't post properly.
Here is one showing the electron cloud in two stages: (1) the electron density being learned during training, and (2) the predicted ground state across conformations.
(9/9) ⚡ Runtime efficiency
Self-refining training reduces total runtime by up to 4× compared to the baseline, and by up to 2× compared to the fully supervised approach!
Less need for large pre-generated datasets: training and sampling happen in parallel.
(8/n) 🧪 Robust generalization
We simulate molecular dynamics using each model's energy predictions and evaluate accuracy along the trajectory.
Models trained with self-refinement stay accurate even far from the training distribution, while baselines quickly degrade.
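For concreteness, here's a minimal overdamped-Langevin sketch of that evaluation setup; `model.energy_and_forces` is a hypothetical interface, not the actual repo API:

```python
import numpy as np

def langevin_md(model, R0, n_steps=1000, dt=1e-3, gamma=1.0, kT=1.0, seed=0):
    """Overdamped Langevin dynamics driven by a learned energy model.

    `model.energy_and_forces(R)` is a hypothetical interface returning the
    predicted energy and forces (negative energy gradient) at coordinates R.
    """
    rng = np.random.default_rng(seed)
    R = R0.copy()
    traj = [R.copy()]
    for _ in range(n_steps):
        _, F = model.energy_and_forces(R)
        # Euler-Maruyama step: drift down the predicted energy + thermal noise
        R = R + (dt / gamma) * F + np.sqrt(2 * kT * dt / gamma) * rng.standard_normal(R.shape)
        traj.append(R.copy())
    return np.stack(traj)  # evaluate energy error against reference DFT along this
```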
(7/n) 📉 Performance under data scarcity
Our method achieves low energy error with as few as 25 conformations.
With 10× less data, it matches or outperforms fully supervised baselines.
This is especially important in settings where labeled data is expensive or unavailable.
(6/n) This minimization leads to Self-Refining Training:
🔁 Use the current model to sample conformations via MCMC
🔁 Use those conformations to minimize the energy and update the model
Everything runs asynchronously, with no need for labeled data and only a minimal number of conformations from a dataset!
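In pseudocode, a simplified synchronous version of the loop (the actual implementation runs sampler and trainer in parallel; `sample_step` and `energy` are hypothetical stand-ins):

```python
import torch

def self_refining_training(model, R, n_rounds=1000, mcmc_steps=10, lr=1e-4):
    """Alternate MCMC sampling under the current model with energy
    minimization on the sampled conformations.

    Hypothetical interfaces:
      model.sample_step(R) -- one MCMC step targeting exp(-E_theta(R)/kT)
      model.energy(R)      -- differentiable predicted energy E(f_theta(R), R)
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_rounds):
        # (1) refresh conformations by sampling from the current model
        with torch.no_grad():
            for _ in range(mcmc_steps):
                R = model.sample_step(R)
        # (2) minimize the predicted energy on those conformations,
        #     pushing f_theta(R) toward the ground-state solution
        opt.zero_grad()
        loss = model.energy(R).mean()
        loss.backward()
        opt.step()
    return model, R
```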
(5/n) To get around this, we introduce a variational upper bound on the KL divergence between any sampling distribution q(R) and the target Boltzmann distribution.
Jointly minimizing this bound w.r.t. θ and q yields
✅ a model that predicts the ground-state solutions
✅ samples that match the ground-truth density
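Schematically (dropping constants; see the paper for the precise statement), the bound follows from the variational principle E(f_θ(R), R) ≥ E*(R):

```latex
% Target: p^*(R) \propto e^{-E^*(R)/k_B T}, \quad E^*(R) = \min_C E(C, R)
% Since E(f_\theta(R), R) \ge E^*(R) for any \theta:
\mathrm{KL}\big(q \,\|\, p^*\big)
  \;\le\; \mathbb{E}_{q(R)}\!\left[\log q(R) + \frac{E\big(f_\theta(R), R\big)}{k_B T}\right] + \log Z^*
% Minimizing over \theta tightens the energy term (ground-state prediction);
% minimizing over q drives the samples toward the Boltzmann density.
```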
(4/n) With an amortized DFT model f_θ(R), we define the density of molecular conformations as the Boltzmann distribution.
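The equation didn't survive the crosspost; schematically, with f_θ predicting the coefficients C for geometry R, the density has the form:

```latex
p_\theta(R) \;=\; \frac{1}{Z_\theta}\,
  \exp\!\left(-\frac{E\big(f_\theta(R),\,R\big)}{k_B T}\right),
\qquad
Z_\theta \;=\; \int \exp\!\left(-\frac{E\big(f_\theta(R),\,R\big)}{k_B T}\right) \mathrm{d}R
```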
This isn't a typical ML setup, because
❌ No samples from the density: can't train a generative model
❌ No density: can't sample via Monte Carlo!
(3/n) DFT offers a scalable alternative to solving the Schrödinger equation, but it must be solved independently for each geometry by minimizing the energy w.r.t. the coefficients C in a fixed basis.
This makes it a bottleneck for MD and sampling.
We want to amortize this: train a model that generalizes across geometries R.
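In symbols (with E(C, R) the DFT energy functional in the fixed basis):

```latex
% Standard DFT: a separate optimization for every geometry R
E^*(R) \;=\; \min_{C}\, E(C, R)
% Amortized DFT: one network predicts the coefficients directly,
% replacing the per-geometry optimization at inference time
C \;\approx\; f_\theta(R), \qquad E\big(f_\theta(R), R\big) \;\ge\; E^*(R)
```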
(2/n) This work is the result of an amazing collaboration with @fntwin.bsky.social Hatem Helal @dom-beaini.bsky.social @k-neklyudov.bsky.social
(1/n) 🚨 Train a model solving DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📄 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
New preprint! 🧠🤖
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵 1/7
🧵 (1/7) Have you ever wanted to combine different pre-trained diffusion models, but don't have the time or data to retrain a new, bigger model?
🚀 Introducing SuperDiff 🦹‍♂️: a principled method for efficiently combining multiple pre-trained diffusion models solely during inference!
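As a rough illustration of the inference-time combination (the Itô density estimator SuperDiff uses to track each model's log-density is omitted, and all interfaces here are hypothetical):

```python
import torch

def superposed_eps(models, x_t, t, logps, temperature=1.0):
    """Sketch of one denoising step mixing K pre-trained diffusion models.

    Hypothetical interfaces: each models[k] maps (x_t, t) to a noise
    prediction; logps[k] is a running estimate of log p_k(x_t) that
    SuperDiff maintains with an Ito density estimator (omitted here).
    """
    eps = torch.stack([m(x_t, t) for m in models])                    # (K, *x_t.shape)
    w = torch.softmax(torch.stack(list(logps)) / temperature, dim=0)  # (K,) weights
    # Softmax-weighted superposition: models assigning higher density
    # to x_t get more say in the update ('OR'-style combination)
    return torch.einsum('k,k...->...', w, eps)
```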
🎉 Super excited to announce the first-ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!
🌐 Website: sites.google.com/view/fpiwork...
📥 Call for papers: sites.google.com/view/fpiwork...
More details in the thread below 👇 🧵
Now you can generate equilibrium conformations for your small molecule in 3 lines of code with ET-Flow! Awesome effort put in by @fntwin.bsky.social!
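The three lines aren't shown in the post; purely as an illustration, such an interface could look like this (all names hypothetical, not the actual ET-Flow API):

```python
# Hypothetical placeholder names -- check the ET-Flow repo for the real API.
from etflow import ETFlow                                   # assumed package name
model = ETFlow.from_pretrained()                            # assumed weights loader
conformers = model.sample("CC(=O)Oc1ccccc1C(=O)O", n=8)     # SMILES -> 3D conformers
```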
ET-Flow shows, once again, that equivariant models beat plain Transformers when physical precision matters!
Come see us at @neuripsconf.bsky.social!
Excited to share our work! I had a wonderful time collaborating with these brilliant people