We have started a project to predict the interactions/structures of all yeast protein pairs using an AlphaFold pooling approach. We are making the current dataset open, and we welcome collaborations.
www.evocellnet.com/2026/03/mapp...
Predicting protein-protein interactions (PPIs) at proteome scale can take months with co-folding models due to the massive all-vs-all comparisons required.
We are excited to announce FlashPPI, a contrastive learning framework that predicts proteome-wide physical interfaces in minutes. 1/🧵
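The post doesn't spell out the objective, but contrastive PPI frameworks typically embed each protein and train the embeddings so that true partners score higher than random pairings. A minimal sketch of what an InfoNCE-style objective could look like (all names here are hypothetical illustrations, not FlashPPI's actual loss):

```python
# Hypothetical sketch of a contrastive PPI objective (InfoNCE-style).
import torch
import torch.nn.functional as F

def contrastive_ppi_loss(emb_a, emb_b, temperature=0.07):
    """emb_a[i] and emb_b[i] embed the two partners of a known interacting
    pair; every off-diagonal combination serves as a negative."""
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    logits = emb_a @ emb_b.T / temperature               # (N, N) similarities
    targets = torch.arange(emb_a.size(0), device=emb_a.device)  # positives on diagonal
    # Symmetric InfoNCE: each protein should retrieve its true partner
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random embeddings
loss = contrastive_ppi_loss(torch.randn(32, 128), torch.randn(32, 128))
```

Once trained, scoring a candidate pair reduces to a dot product between precomputed embeddings, which is how a model of this kind could cover a proteome in minutes rather than co-folding every pair.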
Slava Ukraini (Glory to Ukraine).
Elon posted this yesterday. Just for fun I did it with my before/after cancer PET scans that I had already posted online. In the "before" scan Grok missed the massive tumor lighting up my liver, and in both scans it flagged a non-existent "area of concern" in my chest. Other than that, works great!
If you want to implement an AI tutor, you need to study its performance in naturalistic settings with the kinds of context windows that student learners will provide. You can’t begin with a well-formulated question about a single topic.
Students don’t know how to formulate questions well at first.
@drkatemarvel.bsky.social Are there any good current equivalents of the red stack from David MacKay's 2009 book? It lives rent-free in my head, but I would like to know if it still reflects current knowledge. www.withouthotair.com/c18/page_103...
This may be the most important paper ever published about NIH funded research.
www.science.org/doi/10.1126/...
The science is, in fact, clear. Good studies [1] consistently show a weak association between Tylenol & autism. Those studies are very careful to say they can't establish whether this is causal. A huge Swedish study used a clever sibling design to address this, and showed zero causal effect [2].
Unexpectedly, @jurgjn.bsky.social found that running AlphaFold3 predictions for protein interactions can yield ipTM scores that are more predictive of true interactions when run on pools of proteins instead of as pairwise predictions. Presumably, this reflects some sort of "competition effect".
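To make "more predictive" concrete: one way to quantify it is to score the same candidate pairs by ipTM from pairwise runs and from pooled runs, then compare how well each ranking separates known interactions. A hedged sketch (the file name, column names, and labels are assumptions for illustration, not the group's actual pipeline):

```python
# Compare how well pairwise vs. pooled ipTM scores rank true interactions.
# Assumed CSV layout: pair_id, label (1 = known interaction), iptm_pairwise, iptm_pooled.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("iptm_scores.csv")  # hypothetical file

auc_pairwise = roc_auc_score(df["label"], df["iptm_pairwise"])
auc_pooled = roc_auc_score(df["label"], df["iptm_pooled"])
print(f"pairwise AUC: {auc_pairwise:.3f}, pooled AUC: {auc_pooled:.3f}")
```

Under the competition-effect reading, the pooled AUC would come out higher: with several candidates in one pool, a protein "choosing" one partner's interface is a stronger signal than a pairwise score obtained with no alternatives on offer.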
The war on science in the US is already having an effect on private sector research like AlphaFold. Bears repeating but the private sector builds on top of things created by academic research for the public good. This hurts everyone.
Some crystal structures show evidence of multiple stable conformations, such as loops or side chains with two distinct regions of density. These can be correctly modeled by MD, but not by modern protein ensemble prediction tools like AlphaFlow or DiG (BioEmu was not tested).
www.biorxiv.org/content/10.1...
I miss when the biggest controversy about mRNA was whether it correlated with protein or not.
LLMs can't take responsibility for their mistakes. When a human journalist puts their name on AI-written text, they take on that responsibility.
Increasingly I see inaccurate and badly written news stories authored by AI, many of which have actual humans listed as authors or editors.
The best first sentence of a grant application I've read was (paraphrasing), "Tool X is widely used to do task Y; we will make it accessible to people living with condition Z (13% of the population) so that it is more equitable and more widely used". Let's unpack why, because there's a lesson. 🧵
We’ve now explored this on all GPU models available on our cluster, and some non-A100/H100 GPUs do not have the issue; it seems like something nuanced that can hopefully be fixed:
github.com/google-deepm...
Jurgen in our group ran AF3 locally after some fiddling. It worked well on an A100 but produced some bad models on a lower-end GPU. We don't have access to many A100s, so unless we get it working on lower-end GPUs we won't be able to use this much.
The former finished without errors, but the output was noise (something similar to 100% spaghetti with AlphaFold2). The latter finished with the structure posted by @pedrobeltrao.bsky.social (2/2)
We tried running the example from README.md (“2PV7”) on a lower-end GPU with --flash_attention_implementation=xla (described in performance.md), and on an A100 GPU without that option. (1/2)
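For anyone reproducing this, here is a sketch of the two runs, assuming the standard run_alphafold.py entry point from the AlphaFold 3 repository (input and output paths are placeholders; the flag itself is the one documented in performance.md):

```python
# Sketch of the two runs described above; paths are placeholders.
import subprocess

common = [
    "python", "run_alphafold.py",
    "--json_path=fold_input_2pv7.json",  # hypothetical input file for the 2PV7 example
    "--output_dir=out_2pv7",
    "--model_dir=models",
]

# Run 1: lower-end GPU, with the XLA attention implementation
subprocess.run(common + ["--flash_attention_implementation=xla"], check=True)

# Run 2: A100, default attention implementation
subprocess.run(common, check=True)
```

Note that the xla option is described in performance.md as an alternative attention implementation for GPUs where the default is problematic; since the two runs here differ in both GPU and attention setting, they bracket the problem rather than isolate it.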
1. I'm conflicted about moving old tweets to Bluesky. I like the idea in general of using @blueark.app to preserve that content.
I have a few threads posted during COVID—for example, the original analysis of the IHME model right after it came out—that seem a valuable part of the scientific record.