
Sebastian Dick

@semodi.bsky.social

Machine learning researcher and engineer @ D. E. Shaw Research. QM and ML force fields.

839 Followers  |  971 Following  |  12 Posts  |  Joined: 14.10.2023

Latest posts by semodi.bsky.social on Bluesky

I used to be in favor of single-payer health care in the US. Then I became an inpatient at an NHS hospital for a week.

12.11.2025 06:52 · 👍 0  🔁 0  💬 0  📌 0

I’ve always silently judged chemists for using the term β€œinduction” instead of polarization. Maybe they were playing the long game after all…

04.02.2025 03:38 · 👍 3  🔁 0  💬 0  📌 0
Alchemical Free-Energy Calculations at Quantum-Chemical Precision

In the past decade, machine-learned potentials (MLP) have demonstrated the capability to predict various QM properties learned from a set of reference QM calculations. Accordingly, hybrid QM/MM simulations can be accelerated by replacement of expensive QM calculations with efficient MLP energy predictions. At the same time, alchemical free-energy perturbations (FEP) remain unachievable at the QM level of theory. In this work, we extend the capabilities of the Buffer Region Neural Network (BuRNN) QM/MM scheme toward FEP. BuRNN introduces a buffer region that experiences full electronic polarization by the QM region to minimize artifacts at the QM/MM interface. An MLP is used to predict the energies for the QM region and its interactions with the buffer region. Furthermore, BuRNN allows us to implement FEP directly into the MLP Hamiltonian. Here, we describe the alchemical change from methanol to methane in water at the MLP/MM level as a proof of concept.

The first first-author paper of Radek Crha in our group is out! We conduct alchemical free-energy calculations directly at the level of a machine-learned potential. Using our BuRNN scheme for NN/MM calculations opens the way for free energies at QM precision!

#compchem
doi.org/10.1021/acs....

20.01.2025 17:49 · 👍 23  🔁 2  💬 0  📌 1
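For context on what FEP computes: the free-energy difference between two Hamiltonians (here, methanol vs. methane in water) can be estimated from samples of one of them. A standard textbook form is the Zwanzig relation; this is general background, not the specific estimator used in the paper:

```latex
\Delta A_{0 \rightarrow 1}
  = -k_{\mathrm{B}} T \,
    \ln \Bigl\langle e^{-\beta \,[\, U_1(\mathbf{r}) - U_0(\mathbf{r}) \,]} \Bigr\rangle_{0},
\qquad
\beta = \frac{1}{k_{\mathrm{B}} T}
```

In practice the transformation is usually split into a series of intermediate λ windows so that neighboring states overlap well.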

Aja - Steely Dan

25.11.2024 01:35 · 👍 2  🔁 0  💬 0  📌 0

I knew that Bluesky was gonna be here to stay when I realized that this account has made it over from X.

20.11.2024 02:05 · 👍 1  🔁 0  💬 0  📌 0

(7/N) That being said, I've been seeing a lot of efforts lately that try to add physically inspired long-range models to MP/attention-based ML potentials, and I'm excited about what's to come.

18.11.2024 15:44 · 👍 5  🔁 0  💬 0  📌 0

(6/N) Ergo, an ML model with a finite cutoff cannot, by design, both be accurate for condensed-phase systems and retain a physical MBE (i.e., remain reliable for dimers, trimers, etc.), and hence is not really "universal".

18.11.2024 15:44 · 👍 1  🔁 0  💬 1  📌 0

(5/N) Focusing on the 2- and 3-body energies, a short-range ML model will immediately break down (i.e., predict zero) for dimers/trimers separated by more than the model's cutoff radius, and hence the MBE of the model becomes non-physical.

18.11.2024 15:44 · 👍 1  🔁 0  💬 1  📌 0
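A toy numerical sketch of that breakdown (my own illustration under simplified assumptions, not any specific MLP architecture): a 2-body model whose output is damped by a smooth cutoff envelope at r_c predicts exactly zero for a dimer separated beyond r_c, while the physical dispersion tail is still nonzero.

```python
import numpy as np

# Physical 2-body energy with a long-range -C6/r^6 dispersion tail:
# nonzero at every finite separation r.
def true_2body(r, c6=1.0):
    return -c6 / r**6

# Short-range surrogate: assume the model reproduces the truth inside
# the cutoff, but its descriptor (here a cosine envelope, as in many
# short-range potentials) vanishes identically beyond r_c.
def cutoff_model_2body(r, r_c=5.0):
    if r >= r_c:
        return 0.0  # no descriptor support -> the model must predict zero
    envelope = 0.5 * (np.cos(np.pi * r / r_c) + 1.0)
    return true_2body(r) * envelope

for r in (3.0, 4.9, 5.1, 8.0):
    print(f"r = {r:4.1f}  true = {true_2body(r):+.2e}  model = {cutoff_model_2body(r):+.2e}")
```

Past r_c = 5.0 the model output drops to exactly zero while the true dispersion energy decays smoothly, so every 2-body term beyond the cutoff is misrepresented.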

(4/N) predict the interaction energy. This may be true, but in comes our trusty many-body expansion (MBE) of the energy. Often used in QM calculations, the MBE states that the total (interaction) energy can be written as a sum of 1-body, 2-body, 3-body, etc. terms.

18.11.2024 15:44 · 👍 1  🔁 0  💬 1  📌 0
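Spelled out, the many-body expansion the thread relies on decomposes the total energy of N monomers as (standard definition):

```latex
E(1,\dots,N) \;=\; \sum_{i} E_i
  \;+\; \sum_{i<j} \Delta E_{ij}
  \;+\; \sum_{i<j<k} \Delta E_{ijk} \;+\; \cdots,
\qquad
\Delta E_{ij} = E(ij) - E_i - E_j
```

with the 3-body correction ΔE_{ijk} defined analogously by subtracting all 1- and 2-body contributions from E(ijk).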

(3/N) We want to compute the interaction energy between this molecule and its surrounding water. It is clear that long-range effects are important, especially for ionic solutes. Oftentimes the argument is made that screening takes care of this, and a short-ranged ML model can still accurately...

18.11.2024 15:44 · 👍 1  🔁 0  💬 1  📌 0

(2/N) I'm not gonna go into detail about why I think this particular dimer binding curve looks the way it does, as a lot of good arguments have been made in the thread(s). Thought experiment: Consider a molecule in a solvent (say, water).

18.11.2024 15:44 · 👍 1  🔁 0  💬 1  📌 0

(1/N) Because I've seen this pop up on my feed a lot lately, I want to add my own bit re "universal" machine learning force fields: my argument is that any ML potential with range-limited message passing (or attention) cannot be universal.

18.11.2024 15:44 · 👍 15  🔁 2  💬 1  📌 1

Hi, can I be added please? Working on ML applied to compchem, in particular QM methods and MD simulations.

16.11.2024 19:37 · 👍 1  🔁 0  💬 1  📌 0
