
Cyril Malbranke

@cyrilmalbranke.bsky.social

Postdoc @ EPFL. Previously @ ENS and Institut Pasteur. Protein design, Protein Language Models.

163 Followers  |  758 Following  |  8 Posts  |  Joined: 07.12.2023

Latest posts by cyrilmalbranke.bsky.social on Bluesky

πŸŽ‰ Excited to share that the last paper of my PhD is now published in PRX Life!

We introduce RAG-ESM, a retrieval-augmented framework that makes pretrained protein language models (like ESM2) homology-aware with minimal training cost.

πŸ“„ Paper: journals.aps.org/prxlife/abst...

21.08.2025 16:13 β€” πŸ‘ 7    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0
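A minimal sketch of the retrieval-augmented idea, assuming a frozen ESM2 backbone from Hugging Face: retrieve a homolog for the query and let the query's residues cross-attend to it. The cross-attention wiring, model size, and sequences are illustrative assumptions, not RAG-ESM's actual architecture (see the paper for that).

```python
# Illustrative sketch only: retrieval-augmented conditioning of a frozen ESM2.
# The fusion layer below is an assumption, not RAG-ESM's published design.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, EsmModel

tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
esm = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")

def embed(seq: str) -> torch.Tensor:
    """Per-residue ESM2 embeddings, shape (1, L, d)."""
    inputs = tokenizer(seq, return_tensors="pt")
    with torch.no_grad():
        return esm(**inputs).last_hidden_state

# Hypothetical query and retrieved homolog (in practice the homolog would come
# from a sequence-similarity search over a reference database).
query_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
homolog_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"

q = embed(query_seq)    # (1, Lq, d)
h = embed(homolog_seq)  # (1, Lh, d)

# Lightweight cross-attention: query residues attend to homolog residues,
# injecting homology context on top of the frozen backbone.
cross_attn = nn.MultiheadAttention(embed_dim=q.size(-1), num_heads=4, batch_first=True)
context, _ = cross_attn(query=q, key=h, value=h)
homology_aware = q + context  # residual fusion of retrieved context
print(homology_aware.shape)
```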

Protein-protein interactions studied by @cyrilmalbranke.bsky.social #PragueBioML @elixircz.bsky.social

22.08.2025 10:57 β€” πŸ‘ 10    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

[8/8] πŸ’» Resources:
β€’ Training dataset
β€’ 4 pre-trained models (XS β†’ L)
β€’ Code & interactive notebooks
πŸ”— huggingface.co/collections/...
πŸ”— github.com/Bitbol-Lab/P...

21.08.2025 13:55 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
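Since the Hugging Face collection link above is truncated, here is a hedged loading sketch; the repository id is a placeholder, and the exact checkpoint names for the XS → L family may differ (check the linked GitHub README).

```python
# Placeholder sketch: download one of the released checkpoints from the Hub.
# "Bitbol-Lab/ProteomeLM-XS" is a hypothetical repo id, not a confirmed name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Bitbol-Lab/ProteomeLM-XS")
print("Checkpoint files downloaded to:", local_dir)
```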

[7/8] πŸ“Š In conclusion, results show strong performances across species and benchmarks for both PPI prediction and gene essentiality. ProteomeLM makes proteome-wide analysis more practical, easing large-scale studies, including in complex eukaryotic proteomes.

21.08.2025 13:55 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Gene essentiality results, showing that ProteomeLM outperforms ESM-C and that predictions are good on E. coli, S. cerevisiae, and minimal cells

[6/8] 🎯 Beyond PPIs: ProteomeLM predicts gene essentiality across diverse taxa (e.g. E. coli, yeast, minimal cells), highlighting its potential for broad downstream applications.

21.08.2025 13:55 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
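One way to picture a supervised essentiality predictor on top of contextual protein embeddings is a simple classifier; a minimal sketch with random stand-in data follows (ProteomeLM-Ess itself may use a different head and features).

```python
# Sketch: binary gene-essentiality classifier over per-protein embeddings.
# Embeddings and labels are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 512))   # per-protein contextual embeddings (placeholder)
y = rng.integers(0, 2, size=2000)  # 1 = essential, 0 = non-essential (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```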
Bar plot showing the speed improvement over classical DCA methods

Number of predictions as a function of recall, showing the performance leap from classical DCA methods to ProteomeLM on the human interactome (AUROC 0.73 → 0.826)

Performance on the D-SCRIPT dataset for supervised PPI prediction on four organisms

[5/8] ⚑ This allows unsupervised and supervised PPI prediction at proteome scale in minutes, several orders of magnitude faster than coevolution-based methods such as DCA.
Try it here: github.com/Bitbol-Lab/P...

21.08.2025 13:55 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
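A rough intuition for the speed gap: once every protein has one embedding, scoring all pairs in a proteome collapses to a single matrix product, whereas coevolution methods such as DCA analyze alignments pair by pair. The scoring function below is a simplified stand-in, not ProteomeLM's actual score.

```python
# Sketch: proteome-wide pair scoring as one matrix product (placeholder score).
import numpy as np

n_proteins, d = 4000, 512             # roughly the size of a bacterial proteome
rng = np.random.default_rng(0)
E = rng.normal(size=(n_proteins, d))  # placeholder per-protein embeddings

E /= np.linalg.norm(E, axis=1, keepdims=True)
scores = E @ E.T                      # (n, n) pairwise similarity scores
np.fill_diagonal(scores, -np.inf)     # ignore self-pairs

# Top-scoring candidate pairs (scores are symmetric, so each pair shows up twice).
i, j = np.unravel_index(np.argsort(scores, axis=None)[-10:], scores.shape)
print(list(zip(i.tolist(), j.tolist())))
```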
Heatmap showing that ProteomeLM attention heads can distinguish interacting vs non-interacting pairs in E. coli, S. cerevisiae, and H. sapiens

[4/8] 🎯 Key finding: Attention heads spontaneously encode protein–protein interaction networks. Some heads can reach an AUC of 0.92 in discriminating interacting vs non-interacting pairs.

21.08.2025 13:55 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
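To make the AUC claim concrete, evaluating one head amounts to treating its protein-pair attention map as scores and comparing them against known interactions; a sketch with random placeholders:

```python
# Sketch: AUC of a single attention head's pair scores vs. a known PPI network.
# The attention map and interaction labels are random placeholders here.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
attn = rng.random(size=(500, 500))            # head's protein-pair attention map
labels = rng.integers(0, 2, size=(500, 500))  # 1 = known interaction (placeholder)

iu = np.triu_indices(500, k=1)                # unique unordered pairs
sym = 0.5 * (attn + attn.T)                   # symmetrize the head's scores
print("AUC:", roc_auc_score(labels[iu], sym[iu]))
```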

[3/8] 🧬 Encoding strategy: Instead of positional encoding, ProteomeLM introduces a functional encoding based on orthologous groups, so each protein's representation can draw on functional context from the rest of the proteome rather than on gene order. This is especially important in eukaryotes, where gene order is less conserved.

21.08.2025 13:55 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
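A minimal sketch of how a functional encoding can stand in for positional encoding: each protein gets an embedding keyed by its orthologous-group (OG) id, so the representation is invariant to gene order. Sizes and OG ids below are illustrative.

```python
# Sketch: orthologous-group embedding in place of a positional encoding.
import torch
import torch.nn as nn

n_orthologous_groups, d_model = 50_000, 512   # illustrative sizes
og_embedding = nn.Embedding(n_orthologous_groups, d_model)

# One proteome = a set of proteins, each mapped to an OG id (e.g. from an
# orthology database); the ids here are made up.
og_ids = torch.tensor([[17, 4203, 991, 23117]])  # (batch, n_proteins)
protein_repr = torch.randn(1, 4, d_model)        # per-protein input embeddings

# The functional encoding is added where a positional encoding would usually go.
x = protein_repr + og_embedding(og_ids)
print(x.shape)  # torch.Size([1, 4, 512])
```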
Figure 1. ProteomeLM Architecture

[2/8] 🧬 Training objective: ProteomeLM uses a custom masked language modeling task, predicting masked ESM-C representations of proteins within the proteome.

21.08.2025 13:55 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
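In code, that objective looks like masked-embedding reconstruction: hide some proteins' ESM-C embeddings and train a transformer to predict them from the rest of the proteome. The loss choice (MSE), masking rate, and shapes below are assumptions for illustration.

```python
# Sketch: mask protein embeddings and reconstruct them from proteome context.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_proteins = 512, 128
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=2
)
mask_token = nn.Parameter(torch.zeros(d_model))  # learned "masked protein" vector

target = torch.randn(1, n_proteins, d_model)  # ESM-C protein embeddings (placeholder)
mask = torch.rand(1, n_proteins) < 0.15       # mask 15% of proteins (assumed rate)

x = target.clone()
x[mask] = mask_token                          # replace masked proteins' embeddings
pred = encoder(x)                             # reconstruct from proteome context
loss = nn.functional.mse_loss(pred[mask], target[mask])
print(loss.item())
```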
Preview
ProteomeLM: A proteome-scale language model allowing fast prediction of protein-protein interactions and gene essentiality across taxa

Language models starting from biological sequence data are advancing many inference problems, both at the scale of single proteins and at the scale of genomic neighborhoods. In this paper, we introduce ProteomeLM, a transformer-based language model that reasons on entire proteomes from species spanning the tree of life. Leveraging protein language model embeddings, ProteomeLM is trained to reconstruct masked protein embeddings using the whole proteomic context. It thus learns contextualized protein representations reflecting proteome-scale functional constraints. We show that ProteomeLM spontaneously captures protein-protein interactions (PPI) in its attention coefficients. We demonstrate that it screens whole interactomes orders of magnitude faster than amino-acid coevolution-based methods, and substantially outperforms them. We further develop ProteomeLM-PPI, a supervised PPI prediction network that combines ProteomeLM embeddings and attention coefficients, and achieves state-of-the-art performance across species and benchmarks. Finally, we introduce ProteomeLM-Ess, a supervised predictor of gene essentiality that generalizes across diverse taxa. Our results highlight the power of proteome-scale language models for addressing function and interactions at the organism level.

[1/8] πŸ“„ New preprint! With Gionata Paolo Zalaffi & Anne-Florence Bitbol, we introduce ProteomeLM, a transformer that processes entire proteomes (prokaryotes and eukaryotes), enabling ultra-fast protein–protein interaction (PPI) prediction across the tree of life.
πŸ”— www.biorxiv.org/content/10.1...

21.08.2025 13:55 β€” πŸ‘ 17    πŸ” 3    πŸ’¬ 1    πŸ“Œ 1
