
Yo Akiyama

@yoakiyama.bsky.social

MIT EECS PhD student in solab.org. Building ML methods to understand and engineer biology.

727 Followers  |  481 Following  |  14 Posts  |  Joined: 07.11.2023

Latest posts by yoakiyama.bsky.social on Bluesky

GPU-accelerated homology search with MMseqs2 - Nature Methods: Graphics processing unit-accelerated MMseqs2 offers tremendous speedups for homology retrieval from metagenomic databases, query-centered multiple sequence alignment generation for structure predictio...

MMseqs2-GPU sets new standards in single query search speed, allows near instant search of big databases, scales to multiple GPUs and is fast beyond VRAM. It enables ColabFold MSA generation in seconds and sub-second Foldseek search against AFDB50. 1/n
πŸ“„ www.nature.com/articles/s41...
πŸ’Ώ mmseqs.com
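For readers who want to try this, here is a minimal Python sketch of driving a GPU search, assuming an `mmseqs` binary with the GPU support described in the paper is on PATH; the padded-database step and the `--gpu` flag follow the paper's workflow and may differ between releases, and all file names are placeholders.

```python
# Minimal sketch: GPU-accelerated MMseqs2 search driven from Python.
# Assumes an `mmseqs` binary built with GPU support is on PATH; the
# `makepaddedseqdb` step and `--gpu` flag follow the paper's workflow
# and may differ across releases. File names are placeholders.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Build query and target databases from FASTA files.
run(["mmseqs", "createdb", "query.fasta", "queryDB"])
run(["mmseqs", "createdb", "targets.fasta", "targetDB"])

# 2) GPU search expects a padded target database.
run(["mmseqs", "makepaddedseqdb", "targetDB", "targetDB_pad"])

# 3) Search on one GPU, then export a tabular (BLAST-m8-style) hit list.
run(["mmseqs", "search", "queryDB", "targetDB_pad", "resultDB", "tmp", "--gpu", "1"])
run(["mmseqs", "convertalis", "queryDB", "targetDB_pad", "resultDB", "hits.m8"])
```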

21.09.2025 08:06 β€” πŸ‘ 174    πŸ” 64    πŸ’¬ 4    πŸ“Œ 2

Sorry for the slow responses, lots of traveling this week. We use a paired MSA for the toxin-antitoxin proteins (many rows from different species). The top row is the mutated antitoxin sequence + the fixed toxin sequence, and we compute the pseudolikelihood over the 4 mutated positions by masking each one in turn.
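A minimal sketch of that masking procedure, assuming a masked-LM-style model whose forward pass returns per-position logits over the amino-acid vocabulary; `model`, `msa_tokens`, and `mask_idx` are placeholders rather than the actual MSA Pairformer API.

```python
# Sketch of a masked pseudolikelihood over selected positions in the
# query row (row 0) of a paired MSA. The model interface is assumed,
# not the actual MSA Pairformer API.
import torch

def pseudolikelihood(model, msa_tokens, positions, mask_idx):
    """Sum of log P(true residue at i | rest) over `positions` in row 0.

    msa_tokens: LongTensor [num_seqs, seq_len], row 0 = mutated query.
    positions:  mutated interface positions (0-based columns).
    """
    total = 0.0
    for i in positions:
        masked = msa_tokens.clone()
        true_tok = masked[0, i].item()
        masked[0, i] = mask_idx                 # mask one mutated position
        logits = model(masked.unsqueeze(0))     # [1, num_seqs, seq_len, vocab]
        logp = torch.log_softmax(logits[0, 0, i], dim=-1)
        total += logp[true_tok].item()          # log-prob of the true residue
    return total
```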

10.08.2025 21:13 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

MMseqs2 v18 is out
- SIMD FW/BW alignment (preprint soon!)
- Sub. Mat. Ξ» calculator by Eric Dawson
- Faster ARM SW by Alexander Nesterovskiy
- MSA-Pairformer’s proximity-based pairing for multimer prediction (www.biorxiv.org/content/10.1...; avail. in ColabFold API)
πŸ’Ύ github.com/soedinglab/M... & 🐍

05.08.2025 08:25 β€” πŸ‘ 62    πŸ” 17    πŸ’¬ 0    πŸ“Œ 0

Side story: while working on the Google Colab notebook for MSA Pairformer, we encountered a problem: the MMseqs2 ColabFold MSA did not show any contacts at protein interfaces, while our old HHblits alignments showed clear contacts πŸ«₯... (2/4)

05.08.2025 07:39 β€” πŸ‘ 13    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0

Our code and Google Colab notebook can be found here
github.com/yoakiyama/MS...
colab.research.google.com/github/yoaki...
Please reach out with any comments, questions or concerns! We really appreciate all of the feedback from the community and are excited to see how y'all will use MSA Pairformer :)

05.08.2025 06:29 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Special thanks to all members of our team! Their mentorship and support are truly world-class.

And a huge shoutout to the entire solab! I'm so grateful to work with these brilliant and supportive scientists every day. Keep an eye out for exciting work coming out from the team!

05.08.2025 06:29 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Thanks for tuning in--we've already received incredibly valuable feedback from the community and will continue to update our work!

We're excited for all of MSA Pairformer's potential applications in biological discovery and for the future of memory- and parameter-efficient pLMs.

05.08.2025 06:29 β€” πŸ‘ 7    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

We made some updates to MSA pairing with MMseqs2 for modeling protein-protein interactions! Mispairing sequences contaminates the alignment with non-interacting paralogs. We use genomic proximity to improve pairing, and find that MSA Pairformer's predictions reflect pairing quality
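A toy illustration of proximity-based pairing, under the assumption that interacting prokaryotic genes often sit near each other on the genome (e.g., in one operon); the hit record fields here are invented for the example, not the MMseqs2/ColabFold format.

```python
# Toy version of proximity-based pairing: within each species, pair the
# chain-A and chain-B hits whose genomic start coordinates are closest.
# Hit fields and the distance cutoff are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hit:
    species: str
    seq: str
    genome_pos: int  # start coordinate on the source contig

def pair_by_proximity(hits_a, hits_b, max_dist=10_000):
    by_species = {}
    for h in hits_b:
        by_species.setdefault(h.species, []).append(h)
    pairs = []
    for a in hits_a:
        candidates = by_species.get(a.species, [])
        if not candidates:
            continue
        b = min(candidates, key=lambda h: abs(h.genome_pos - a.genome_pos))
        if abs(b.genome_pos - a.genome_pos) <= max_dist:
            pairs.append((a.seq, b.seq))  # one paired MSA row
    return pairs
```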

05.08.2025 06:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

We also looked into how perturbing MSAs affects contact prediction. Interestingly, unlike MSA Transformer, MSA Pairformer doesn't hallucinate contacts after ablating covariance from the MSA. This hints at fundamental differences in how they extract pairwise relationships
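For concreteness, a covariance ablation of this kind is typically implemented by shuffling each MSA column independently, which preserves per-column residue frequencies (and thus conservation) while destroying the inter-column covariance that encodes contacts; keeping the query row fixed is my assumption here, not necessarily the paper's protocol.

```python
# Sketch of a covariance-ablation perturbation: shuffle each MSA column
# independently across sequences. Per-column composition is preserved,
# inter-column covariance is destroyed. Fixing the query row (row 0)
# is an assumption for illustration.
import numpy as np

def ablate_covariance(msa, rng=None, keep_query=True):
    """msa: integer array [num_seqs, seq_len]; returns a shuffled copy."""
    rng = rng or np.random.default_rng(0)
    out = msa.copy()
    start = 1 if keep_query else 0
    for j in range(msa.shape[1]):
        perm = rng.permutation(msa.shape[0] - start) + start
        out[start:, j] = msa[perm, j]
    return out
```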

05.08.2025 06:29 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We ablate triangle updates and replace them with a pair-update analog. As expected, contact precision deteriorates, and the false positives are enriched in indirect correlations. These results suggest that triangle updates help disentangle direct from indirect correlations
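As a reference point, here is a minimal sketch of an outgoing-edge triangle multiplicative update in the AlphaFold2/3 style; gating and normalization are simplified relative to the real module, and this is not MSA Pairformer's exact implementation.

```python
# Minimal sketch of a triangle multiplicative update (outgoing-edge
# flavor): edge (i, j) is updated from all paths i -> k and j -> k,
# which is what lets the model reason about transitivity and strip
# indirect correlations. Simplified vs. the published module.
import torch
import torch.nn as nn

class TriangleMultiplicationOutgoing(nn.Module):
    def __init__(self, d_pair, d_hidden):
        super().__init__()
        self.norm = nn.LayerNorm(d_pair)
        self.proj_a = nn.Linear(d_pair, d_hidden)
        self.proj_b = nn.Linear(d_pair, d_hidden)
        self.gate = nn.Linear(d_pair, d_pair)
        self.out = nn.Linear(d_hidden, d_pair)

    def forward(self, z):                    # z: [L, L, d_pair]
        z_n = self.norm(z)
        a = self.proj_a(z_n)                 # edges i -> k
        b = self.proj_b(z_n)                 # edges j -> k
        # Combine over the third node k: [L, L, d_hidden]
        t = torch.einsum("ikc,jkc->ijc", a, b)
        return z + torch.sigmoid(self.gate(z_n)) * self.out(t)
```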

05.08.2025 06:29 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Whereas the ESM2 family models show an interesting trade-off between contact precision and zero-shot variant effect prediction, MSA Pairformer performs strongly in both
P.S. this figure slightly differs from what's in the preprint and will be updated in v2 of the paper!

05.08.2025 06:29 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Using a library of mutants at four key ParD3-ParE3 toxin-antitoxin interface residues from Aakre et al. (2015), we find that MSA Pairformer's pseudolikelihood scores better discriminate binders from non-binders, which ties directly to its ability to model the interaction
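A sketch of how such a discrimination analysis can be quantified: score each mutant with the masked pseudolikelihood (as in the sketch earlier in the thread) and compute AUROC against the experimental labels. The arrays below are toy placeholders, not the paper's data.

```python
# Toy discrimination analysis: do pseudolikelihood scores separate
# measured binders from non-binders? Values are invented placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

scores = np.array([-3.1, -7.9, -2.4, -9.2, -4.0])  # pseudolikelihoods
is_binder = np.array([1, 0, 1, 0, 1])              # experimental labels
print("AUROC:", roc_auc_score(is_binder, scores))  # 1.0 for this toy data
```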

05.08.2025 06:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Beyond monomeric structures, accurate prediction of protein-protein interactions is crucial for understanding protein function. MSA Pairformer substantially outperforms all other methods in predicting residue-residue interactions at hetero-oligomeric interfaces

05.08.2025 06:29 β€” πŸ‘ 14    πŸ” 4    πŸ’¬ 1    πŸ“Œ 0

On unsupervised long-range contact prediction, it outperforms MSA Transformer and all ESM2 family models, suggesting that its representations more accurately capture structural signals from evolutionary context
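For reference, the standard unsupervised benchmark here is precision of the top-L predicted pairs at sequence separation of at least 24 residues ("long-range"); a short sketch:

```python
# Precision at L for long-range contacts: take the top-L predicted
# pairs with |i - j| >= 24 and report the fraction that are true.
import numpy as np

def long_range_precision_at_L(pred, true_contacts, min_sep=24):
    """pred: [L, L] symmetric score map; true_contacts: [L, L] bool."""
    L = pred.shape[0]
    iu = np.triu_indices(L, k=min_sep)            # pairs with j - i >= min_sep
    order = np.argsort(pred[iu])[::-1][:L]        # top-L scoring pairs
    return true_contacts[iu][order].mean()
```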

05.08.2025 06:29 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We introduce MSA Pairformer, a 111M-parameter, memory-efficient MSA-based protein language model that builds on AlphaFold3's MSA module to extract the evolutionary signals most relevant to the query sequence via a query-biased outer product
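One plausible reading of a query-biased outer product, sketched below: weight each MSA row's contribution to the pairwise representation by its similarity to the query row, instead of taking a plain outer-product mean. The weighting scheme is my own illustration, not the paper's exact formulation.

```python
# Illustrative query-biased outer product vs. a plain outer-product
# mean. The softmax weighting over each row's mean-pooled similarity
# to the query row (row 0) is an assumption for illustration.
import torch
import torch.nn as nn

class QueryBiasedOuterProduct(nn.Module):
    def __init__(self, d_msa, d_hidden, d_pair):
        super().__init__()
        self.proj_a = nn.Linear(d_msa, d_hidden)
        self.proj_b = nn.Linear(d_msa, d_hidden)
        self.out = nn.Linear(d_hidden * d_hidden, d_pair)

    def forward(self, m):                       # m: [num_seqs, L, d_msa]
        pooled = m.mean(dim=1)                  # [num_seqs, d_msa]
        # Bias toward rows similar to the query (row 0).
        w = torch.softmax(pooled @ pooled[0] / pooled.shape[-1] ** 0.5, dim=0)
        a, b = self.proj_a(m), self.proj_b(m)   # [num_seqs, L, d_hidden]
        # Weighted outer product over sequences instead of a plain mean.
        o = torch.einsum("s,sic,sjd->ijcd", w, a, b)
        return self.out(o.flatten(-2))          # [L, L, d_pair]
```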

05.08.2025 06:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Current efforts to improve self-supervised protein language modeling focus on scaling model and training data size, requiring vast resources and limiting accessibility. Can we
1) scale down protein language modeling?
2) expand its scope?

05.08.2025 06:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Scaling down protein language modeling with MSA Pairformer: Recent efforts in protein language modeling have focused on scaling single-sequence models and their training data, requiring vast compute resources that limit accessibility. Although models that use ...

Excited to share work with
Zhidian Zhang, @milot.bsky.social, @martinsteinegger.bsky.social, and @sokrypton.org
biorxiv.org/content/10.1...
TLDR: We introduce MSA Pairformer, a 111M parameter protein language model that challenges the scaling paradigm in self-supervised protein language modeling🧡

05.08.2025 06:29 β€” πŸ‘ 95    πŸ” 43    πŸ’¬ 1    πŸ“Œ 1
