@sanderhaute.bsky.social
Computational physics/chemistry, and some ML. Prev: PhD @ Ghent University 🇧🇪; AI researcher @ Orbital Materials 🇬🇧. Now: postdoc @ Rotskoff group (Stanford University).
... so definitely more than 4.4 but less than 4.6 million? /s
source?
Just expressing my support for typst as well! It's mature enough to typeset a PhD thesis, has a very mellow learning curve, and has a great community.
11.05.2025 22:45
Orb-v3 out now -- achieves SOTA on speed *and* accuracy
arxiv.org/abs/2504.06231
Ridgeline chart showing the distribution of global daily air temperature differences from the pre-industrial reference period (1850-1900), for every year between 1940 and 2024. Each individual year resembles a hill, shaded in a darker shade of red and further to the right for warmer years. The trend is clearly towards warmer years, with 2024 standing out as first year above 1.5C.
NEW: 2024 has just been confirmed as the warmest year on record, and the first to breach the 1.5C threshold.
We used a ridgeline (Joy Division inspired) chart to visualise daily temperature anomalies since 1940.
2024 clearly stands out with 100% of its days above 1.3C and 75% above 1.5C.
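For anyone curious how a chart like that is put together, here is a minimal matplotlib sketch of the ridgeline idea; synthetic anomaly data stands in for the real daily series, so this is only an illustration of the chart type, not the original figure's code or data.

```python
# Minimal ridgeline ("Joy Division") chart sketch with matplotlib.
# Synthetic anomalies stand in for the real daily-temperature data;
# only meant to illustrate the chart type, not to reproduce the figure.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_years = 85  # 1940-2024
fig, ax = plt.subplots(figsize=(6, 10))
for i in range(n_years):
    # fake daily anomalies: slowly warming mean plus day-to-day noise
    anomalies = rng.normal(loc=0.3 + 0.012 * i, scale=0.25, size=365)
    density, edges = np.histogram(anomalies, bins=80, range=(-1.0, 2.5), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ax.fill_between(centers, i, i + 2.0 * density,
                    color=plt.cm.Reds(i / n_years), edgecolor="k", linewidth=0.3)
ax.axvline(1.5, linestyle="--", color="gray")   # the 1.5 °C threshold
ax.set_xlabel("daily anomaly vs 1850-1900 (°C)")
ax.set_yticks([])
plt.show()
```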
typst (the definitive LaTeX successor) and manim (for stunning visuals/movies/slideshows) are two incredibly useful pieces of software (see the minimal manim sketch after the links)
typst.app
github.com/3b1b/manim
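For a taste of manim, here is a minimal scene in the community-edition style of the API; a rough sketch to show the flavour of the library, not from the post (the linked 3b1b repo's own API differs slightly).

```python
# Minimal manim scene (ManimCE-style API); render with e.g.
#   manim -pql scene.py HelloCircle
# Rough sketch to show the flavour of the library only.
from manim import BLUE, Circle, Create, Scene, Square, Transform

class HelloCircle(Scene):
    def construct(self):
        circle = Circle(color=BLUE)            # a blue circle mobject
        square = Square()                      # target shape
        self.play(Create(circle))              # animate drawing the circle
        self.play(Transform(circle, square))   # morph the circle into a square
        self.wait()
```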
Anne Gagneux, Ségolène Martin, @quentinbertrand.bsky.social, Remi Emonet and I wrote a tutorial blog post on flow matching: dl.heeere.com/conditional-... with lots of illustrations and intuition!
We got this idea after their cool work on improving Plug and Play with FM: arxiv.org/abs/2410.02423
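For readers who want the one-equation version, here is a minimal PyTorch-style sketch of the conditional flow matching loss with straight-line interpolation paths, as I understand the standard formulation; the names are illustrative, not the tutorial's code.

```python
# Minimal sketch of conditional flow matching with straight-line paths.
# v_theta is any network mapping (x_t, t) -> predicted velocity.
import torch

def cfm_loss(v_theta, x0, x1):
    """x0 ~ noise/prior samples, x1 ~ data samples; both of shape (batch, dim)."""
    t = torch.rand(x0.shape[0], 1)        # sample t uniformly in [0, 1]
    x_t = (1 - t) * x0 + t * x1           # point on the straight-line path
    target = x1 - x0                      # conditional velocity along that path
    return ((v_theta(x_t, t) - target) ** 2).mean()
```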
Here I was thinking I'd have a hard time convincing people RPA is empirical.
Do quantum Monte Carlo techniques have true potential or are we stuck with decades-old approximations invented by highly noncomputational scientists?
Simple Python API to drive a single 'master' job which then does everything else!
25.11.2024 18:09
scalable molecular simulation: github.com/molmod/psiflow
scientific: ML potentials, DFT and post-HF calculations, (path-integral) MD, replica exchange, alchemical ΔF, Hessians, ...
technical: automated job submission, simple Python, scales to >100 nodes, containerized!
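To illustrate the "single master script drives everything" pattern in the most generic way, here is a standard-library-only sketch; the names and structure are my own illustration of the pattern and are NOT psiflow's actual API (see the repo for real examples).

```python
# Hypothetical illustration of the "single master job drives everything" pattern,
# using only the standard library. This is NOT psiflow's API; in the real thing,
# each worker call would map to a containerized job submitted to the cluster.
from concurrent.futures import ProcessPoolExecutor

def run_simulation(seed: int) -> float:
    """Stand-in for an expensive MD/DFT job; returns a fake 'energy'."""
    return sum((seed * k) % 7 for k in range(1000)) / 1000.0

def main():
    with ProcessPoolExecutor() as pool:          # workers ~ compute nodes
        energies = list(pool.map(run_simulation, range(16)))
    print(f"master script collected {len(energies)} results")

if __name__ == "__main__":
    main()
```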
is there a #compchem starter pack here?
25.11.2024 14:23
a golden PES
Actually, from that perspective, even a 1000x slowdown could be acceptable since it would be used less for super long MDs and more for building models above and beyond atomic-level MD...
From a distance (and this is probably controversial), it feels like AF has made so much progress that would otherwise have required decades of atomic simulations?
24.11.2024 15:42
For drug discovery, do you think more accurate atomic interactions are the way to go, or will people gradually abandon bottom-up atomic-level simulations?
24.11.2024 15:40
So as long as all possible low-density environments are included in training, putting a limit at a fixed cutoff makes sense?
24.11.2024 15:04
Hmm, yeah, and maybe the fixed-neighbors thing is just a trick to speed up training and improve performance on the synthetic benchmarks
At the same time: beyond a 'threshold' number of neighbors there is maybe so much screening that the required number of neighbors to include becomes a constant?
I should really get out of my matsci cave because I am so unaware of these things
I was imagining OpenMM with a bunch of custom force expressions, PME, and anisotropic pressure control (often needed in the solid state) at 1 ms / step, and maybe 100 ms / step for MACE for a similar system, approx.
at least in my experience!
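For concreteness, here is a rough OpenMM sketch of the three ingredients mentioned above (a custom nonbonded expression, PME electrostatics, and an anisotropic barostat), with a toy two-particle system standing in for any real topology; this is only my reading of that comment, not a production setup.

```python
# Rough sketch of the ingredients mentioned above: a custom pairwise force
# expression, PME electrostatics, and anisotropic pressure coupling in OpenMM.
# A toy two-particle "crystal" replaces any real topology; not a production setup.
import openmm
import openmm.unit as unit

system = openmm.System()
box = 2.0  # nm
system.setDefaultPeriodicBoxVectors(
    openmm.Vec3(box, 0, 0) * unit.nanometer,
    openmm.Vec3(0, box, 0) * unit.nanometer,
    openmm.Vec3(0, 0, box) * unit.nanometer,
)
for _ in range(2):
    system.addParticle(39.9 * unit.amu)          # argon-like particles

# PME electrostatics via the standard NonbondedForce
nonbonded = openmm.NonbondedForce()
nonbonded.setNonbondedMethod(openmm.NonbondedForce.PME)
nonbonded.setCutoffDistance(0.9 * unit.nanometer)
for _ in range(2):
    nonbonded.addParticle(0.0, 0.34, 0.996)      # charge, sigma (nm), epsilon (kJ/mol)
system.addForce(nonbonded)

# A custom pairwise expression (soft exponential repulsion, just as an example)
custom = openmm.CustomNonbondedForce("A*exp(-r/rho); A=1000; rho=0.1")
custom.setNonbondedMethod(openmm.CustomNonbondedForce.CutoffPeriodic)
custom.setCutoffDistance(0.9 * unit.nanometer)
for _ in range(2):
    custom.addParticle([])                       # no per-particle parameters
system.addForce(custom)

# Anisotropic pressure control, often needed for solid-state cells
barostat = openmm.MonteCarloAnisotropicBarostat(
    openmm.Vec3(1.0, 1.0, 1.0) * unit.bar, 300 * unit.kelvin
)
system.addForce(barostat)
```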
24.11.2024 14:41
Although matbench leading entries usually truncate the number of neighbors to consider to a fixed number, such that the cost of a single message passing layer no longer scales with density …
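For what "truncate to a fixed number of neighbors" means in practice, here is a small generic numpy sketch that keeps only the k nearest atoms per atom instead of everything within a cutoff; illustrative only, not any specific model's code.

```python
# Illustrative sketch: keep only the k nearest neighbours per atom, so the
# per-layer cost is O(N*k) regardless of density (not any specific model's code).
import numpy as np

def knn_neighbor_list(positions: np.ndarray, k: int) -> np.ndarray:
    """positions: (N, 3) array; returns (N, k) indices of nearest neighbours."""
    diff = positions[:, None, :] - positions[None, :, :]   # (N, N, 3), no PBC here
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                          # exclude self
    return np.argsort(dist, axis=1)[:, :k]

positions = np.random.rand(100, 3) * 10.0
neighbors = knn_neighbor_list(positions, k=16)
print(neighbors.shape)  # (100, 16)
```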
24.11.2024 14:40
Right sorry, wasn't counting bio! Though I think it's more the increased density rather than sheer system size that widens the performance gap?
In matsci / catalysis, with proper enhanced sampling, the main worry is not so much the achievable time scales as the accuracy of the QM data…
100x because that's how much slower an 'optimized' ML potential for any particular system would be. I might be optimistic here but a small MACE network and the right training data have always gotten me below ~1 meV/atom and ~50 meV/Å errors.
24.11.2024 11:09
Epic capture… Grand Canyon National Park in Arizona
24.11.2024 02:05
Thrilled to announce Boltz-1, the first open-source and commercially available model to achieve AlphaFold3-level accuracy on biomolecular structure prediction! An exciting collaboration with Jeremy, Saro, and an amazing team at MIT and Genesis Therapeutics. A thread!
17.11.2024 16:20
link: arxiv.org/abs/2404.03777
08.04.2024 13:00
new SOTA on collective variable learning!
gist? Train a classifier in the feature space of a pretrained GNN to predict the 'phase' of an atomic geometry (rough sketch after the list below):
CV(A->B) = logit(B) - logit(A)
+data-efficient
+invariant wrt trans/rot/perm
+compatible w foundation models!
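My reading of the recipe as a rough sketch: freeze a pretrained GNN as the featurizer, train only a small classifier head on its invariant per-structure features, and take a logit difference as the CV. Names, shapes, and the featurizer interface below are illustrative, not the paper's code.

```python
# Rough sketch: classifier head on frozen pretrained-GNN features,
# with CV(A->B) = logit(B) - logit(A). Illustrative, not the paper's code.
import torch
import torch.nn as nn

class LogitCV(nn.Module):
    def __init__(self, featurizer: nn.Module, feature_dim: int, n_phases: int = 2):
        super().__init__()
        self.featurizer = featurizer                  # frozen pretrained GNN / foundation model
        for p in self.featurizer.parameters():
            p.requires_grad_(False)
        self.head = nn.Linear(feature_dim, n_phases)  # only this part is trained

    def forward(self, geometry) -> torch.Tensor:
        feats = self.featurizer(geometry)             # invariant per-structure features
        return self.head(feats)                       # logits over phases

    def cv(self, geometry, a: int = 0, b: int = 1) -> torch.Tensor:
        logits = self.forward(geometry)
        return logits[..., b] - logits[..., a]        # CV(A->B) = logit(B) - logit(A)
```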