
Wei-Tse Hsu

@weitse-hsu.bsky.social

- Postdoc in Drug Design at Oxford Biochemistry (Biggin Lab)
- Ph.D. from the Shirts Group at CU Boulder
- Keen on compchem, deep learning & education
- Rookie runner
- Originally from Taiwan
- Check my MD tutorials: https://weitsehsu.com/

55 Followers  |  308 Following  |  6 Posts  |  Joined: 11.12.2025

Latest posts by weitse-hsu.bsky.social on Bluesky


Now out in JACS! 🎉 "Computing Solvation Free Energies of Small Molecules with Experimental Accuracy"! It's been a pleasure to collaborate on this with Harry Moore (@jhmchem.bsky.social) & Gábor Csányi pubs.acs.org/doi/10.1021/...

27.01.2026 19:28 | 👍 29  🔁 8  💬 1  📌 0

New Preprint!! We show that binding entropy can be quantitatively predicted from crystallographic ensemble models, accounting for both protein conformational entropy and solvent entropy! www.biorxiv.org/content/10.6...

21.01.2026 20:49 | 👍 39  🔁 14  💬 1  📌 2
Can AI-Predicted Complexes Teach Machine Learning to Compute Drug Binding Affinity? We evaluate the feasibility of using co-folding models for synthetic data augmentation in training machine learning-based scoring functions (MLSFs) for binding affinity prediction. Our results show th...

🚀 Bottom line:
With careful filtering, co-folding predictions can indeed teach ML about binding affinity.

👉 Read the full JCIM paper: pubs.acs.org/doi/full/10....

Work with Aniket Magarkar
@boehringerglobal.bsky.social and @philbiggin.bsky.social @ox.ac.uk

(6/6)

20.01.2026 19:27 | 👍 2  🔁 1  💬 0  📌 0

🔎 SI highlights:
- AEV-PLIG beats Boltz-2 in 4 target classes in the FEP benchmark (loses 1, ties 6); both are competitive with FEP+ in some cases.
- ipLDDT & ligand pLDDT are also effective filters; pTM, PAE, and PDE are not.
- Boltz confidence seems to generalize better than its structure module.
(5/6)

20.01.2026 19:27 | 👍 1  🔁 0  💬 1  📌 0

โ“ Are co-folding predictions good enough to train scoring functions?

👉 Yes, with careful filtering. We see no performance difference between models trained on:
- experimental structures
- corresponding co-folding predictions

This holds across AEV-PLIG, EHIGN, and RF-Score.
(4/6)

20.01.2026 19:27 | 👍 0  🔁 0  💬 1  📌 0

โ“ When can we trust a co-folding prediction?

๐Ÿ‘‰ From reproducing HiQBind with Boltz-1x, a few simple heuristics are recommended high-quality cofolding augmentation:
1๏ธโƒฃ single-chain systems
2๏ธโƒฃ Boltz confidence > 0.9
3๏ธโƒฃ trainโ€“test similarity > 60%

(3/6)
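The three heuristics above can be sketched as a simple filter. This is a minimal illustration only; the field names, data, and dictionary layout below are hypothetical placeholders, not from the paper's actual pipeline:

```python
# Sketch of the three filtering heuristics from the post: single-chain
# systems, Boltz confidence > 0.9, and train-test similarity > 60%.
# All field names and example records here are illustrative assumptions.

def passes_filters(prediction: dict) -> bool:
    """Return True if a co-folded complex qualifies as augmentation data."""
    return (
        prediction["n_chains"] == 1                     # single-chain systems only
        and prediction["boltz_confidence"] > 0.9        # high model confidence
        and prediction["train_test_similarity"] > 0.60  # similar to training set
    )

# Hypothetical co-folding predictions to filter:
predictions = [
    {"n_chains": 1, "boltz_confidence": 0.95, "train_test_similarity": 0.72},
    {"n_chains": 2, "boltz_confidence": 0.97, "train_test_similarity": 0.80},
    {"n_chains": 1, "boltz_confidence": 0.85, "train_test_similarity": 0.65},
]
kept = [p for p in predictions if passes_filters(p)]
print(len(kept))  # only the first record passes all three filters
```

Each heuristic removes a distinct failure mode: multi-chain systems are harder to co-fold reliably, low Boltz confidence flags doubtful poses, and low train-test similarity indicates out-of-distribution targets.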

20.01.2026 19:27 | 👍 1  🔁 0  💬 1  📌 0

โ“ How much can data augmentation actually improve scoring?

๐Ÿ‘‰ Short answer: only if the added data are high-quality. Adding BindingNet v1 clearly improved performance, but v2 did notโ€”despite being 10x largerโ€”due to its substantially lower quality.

Quality beats quantity.
(2/6)

20.01.2026 19:27 | 👍 1  🔁 0  💬 1  📌 0

📢 Can AI-Predicted Complexes Teach Machine Learning to Compute Drug Binding Affinity?

In our recent JCIM work, we tested whether co-folding models can be used for data augmentation for training ML-based scoring functions (SFs).

We asked 3 simple but critical questions. 👇
(1/6)

20.01.2026 19:27 | 👍 6  🔁 1  💬 1  📌 0
