
Charlie Pugh

@cwjpugh.bsky.social

PhD candidate - Machine Learning and Genomics @CRG.eu with @jonnyfrazer.bsky.social and @MafaldaFigDias

249 Followers  |  1,164 Following  |  10 Posts  |  Joined: 27.11.2024

Latest posts by cwjpugh.bsky.social on Bluesky

We also saw some improvements with the genomic language model Evo 2, but in this case the interpretation was less clear. See the preprint for more details. Code for using LFB will be made available shortly. 10/10

26.05.2025 17:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This provides evidence that better fitness estimation can be achieved at negligible computational cost by bridging the gap between likelihood and fitness at inference time. 9/n

26.05.2025 17:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
We show a scatterplot of per-gene ROC-AUCs, calculated by separating benign- and pathogenic-labelled variants with either the usual or the LFB fitness estimate.


This trend held across DMS assay types and mutational depth, and also on prediction of clinical variants. 8/n

26.05.2025 17:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
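The per-gene evaluation described above can be sketched in a few lines. This is a minimal illustration, not code from the preprint: it uses the Mann-Whitney formulation of ROC-AUC and assumes the convention that lower fitness estimates indicate more damaging variants.

```python
# Minimal sketch of the per-gene ROC-AUC: the probability that a
# randomly chosen pathogenic variant scores lower (more damaging, by
# assumption) than a randomly chosen benign one. Ties count half.

def roc_auc(pathogenic_scores, benign_scores):
    """Mann-Whitney ROC-AUC for separating pathogenic from benign variants."""
    wins = 0.0
    for p in pathogenic_scores:
        for b in benign_scores:
            if p < b:
                wins += 1.0
            elif p == b:
                wins += 0.5
    return wins / (len(pathogenic_scores) * len(benign_scores))
```

An AUC of 1.0 means the fitness estimate perfectly separates the two label sets; 0.5 is chance level.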
We show a plot of Model Size vs Mean Spearman Correlation across the DMS datasets from ProteinGym for ESM-2 and ProGen2 model families both with and without the LFB estimation.


On ProteinGym, LFB provided significant improvements across model classes and sizes, and we saw that larger, better-fit models generally provided better predictions.
proteingym.org 7/n

26.05.2025 17:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 1

We found, under an Ornstein–Uhlenbeck model of evolution, that our LFB estimate should have lower variance than the standard estimate, because averaging marginalises out the effect of drift. 6/n

26.05.2025 17:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
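The variance argument in the post above can be sketched informally; the notation here is mine, not the preprint's:

```latex
% Informal sketch (my notation, not the preprint's): each homolog's
% likelihood-ratio score decomposes into the true fitness effect s
% plus zero-mean noise from drift along its lineage.
\[
  \hat{s}_i = s + \varepsilon_i,
  \qquad \varepsilon_i \sim \mathcal{N}\bigl(0,\, \sigma_{\mathrm{drift}}^2\bigr)
\]
% Averaging over n related sequences marginalises the drift term:
\[
  \hat{s}_{\mathrm{LFB}} = \frac{1}{n} \sum_{i=1}^{n} \hat{s}_i,
  \qquad
  \operatorname{Var}\bigl(\hat{s}_{\mathrm{LFB}}\bigr)
  = \frac{\sigma_{\mathrm{drift}}^2}{n}
  \quad \text{(independent } \varepsilon_i\text{)}
\]
```

In practice the homologs share phylogenetic history, so the noise terms are correlated and the reduction is smaller than 1/n, but the averaged estimate should still have lower variance than any single-sequence score.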
We show a schematic of the LFB estimate: by averaging over predictions for a variant applied to other related sequences, we produce a score that should be closer to the true change in fitness.


We tried a simple strategy, which we call likelihood fitness bridging (LFB): averaging predictions over sequences under similar selective pressures, to reduce the impact of unwanted, non-fitness-related correlations. 5/n

26.05.2025 17:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
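The LFB strategy described above can be sketched in a few lines. This is a minimal illustration, not the preprint's implementation: `log_likelihood` is a hypothetical stand-in for whatever pre-trained model scores a sequence, and homologs are assumed pre-aligned so the variant position maps directly onto each sequence.

```python
# Minimal sketch of likelihood fitness bridging (LFB).
# `log_likelihood` is a hypothetical stand-in for a pre-trained
# sequence model's scoring function; homologs are assumed pre-aligned,
# so `variant_pos` indexes the same site in every sequence.

def lfb_score(variant_pos, variant_aa, homologs, log_likelihood):
    """Average the variant's log-likelihood ratio across related sequences."""
    ratios = []
    for seq in homologs:
        # Apply the same substitution to each related sequence.
        mutant = seq[:variant_pos] + variant_aa + seq[variant_pos + 1:]
        # Per-homolog estimate: change in log-likelihood from the substitution.
        ratios.append(log_likelihood(mutant) - log_likelihood(seq))
    # Averaging over homologs damps correlations unrelated to fitness.
    return sum(ratios) / len(ratios)
```

Averaging over sequences under similar selective pressure is what "bridges" likelihood and fitness at inference time; no further training of the model is involved.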

We wondered whether we might be able to improve predictions from existing models without any further training. 4/n

26.05.2025 17:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Non-identifiability and the Blessings of Misspecification in Models... Misspecification is a blessing, not a curse, when estimating protein fitness from evolutionary sequence data using generative models.

Weinstein et al. show that better-fit sequence models can perform worse at fitness estimation due to phylogenetic structure:
openreview.net/forum?id=CwG...
And in practice we are seeing that pLMs don’t improve with lower perplexities:
openreview.net/forum?id=UvP... www.biorxiv.org/content/10.1... 3/n

26.05.2025 17:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Have We Hit the Scaling Wall for Protein Language Models? Beyond Scaling: What Truly Works in Protein Fitness Prediction

Protein language models are showing promise in variant effect prediction - but there’s emerging evidence that likelihood-based zero-shot fitness estimation is breaking down. See this excellent summary from @pascalnotin.bsky.social: pascalnotin.substack.com/p/have-we-hi... 2/n

26.05.2025 17:30 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
From Likelihood to Fitness: Improving Variant Effect Prediction in Protein and Genome Language Models Generative models trained on natural sequences are increasingly used to predict the effects of genetic variation, enabling progress in therapeutic design, disease risk prediction, and synthetic biolog...

New preprint in collaboration with @paulinanunezv.bsky.social supervised by @jonnyfrazer.bsky.social and Mafalda Dias – we propose a simple approach to improving zero-shot variant effect prediction in pre-existing protein and genome language models: 🧢 1/n

www.biorxiv.org/content/10.1...

26.05.2025 17:30 β€” πŸ‘ 73    πŸ” 24    πŸ’¬ 1    πŸ“Œ 4
Post image

@cwjpugh.bsky.social at #VariantEffect25

22.05.2025 10:29 β€” πŸ‘ 19    πŸ” 8    πŸ’¬ 0    πŸ“Œ 0

Three BioML starter packs now!

Pack 1: go.bsky.app/2VWBcCd
Pack 2: go.bsky.app/Bw84Hmc
Pack 3: go.bsky.app/NAKYUok

DM if you want to be included (or nominate people who should be!)

03.12.2024 03:27 β€” πŸ‘ 147    πŸ” 60    πŸ’¬ 16    πŸ“Œ 6
Post image

Thanks Charlie for opening the PhD Symposium! Many thanks to everyone involved in its organisation. #CRGPhDSymp2024

28.11.2024 09:10 β€” πŸ‘ 7    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0
