Aya Abdelsalam Ismail

@asalamismail.bsky.social

Research Scientist @prescientdesign @Genentech Former PhD @umdcs ayaismail.com

43 Followers · 19 Following · 8 Posts · Joined: 07.12.2024

Posts by Aya Abdelsalam Ismail (@asalamismail.bsky.social)

Preview: "Concept Bottleneck Language Models For protein design. We introduce Concept Bottleneck Protein Language Models (CB-pLM), a generative masked language model with a layer where each neuron corresponds to an interpretable concept. Our architecture offers thr..."

[8/n] More details in the paper: arxiv.org/abs/2411.06090 and our blog post ncfrey.substack.com/p/building-t....

You can also find the code on GitHub: github.com/prescient-de... and model weights on 🤗.

12.12.2024 22:50 · 👍 4 · 🔁 0 · 💬 0 · 📌 0

[7/n] Our architecture lets us see which concepts the model learned and which concepts it uses during inference by inspecting the weights of the final linear layer; this offers a way to debug and assess the model's quality.

12.12.2024 22:50 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
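The kind of weight inspection described above can be sketched in a few lines: rank the concepts by the magnitude of their weight into a given output logit. This is a toy illustration with made-up concept names and numbers, not the actual CB-pLM code.

```python
# Toy sketch: which concepts feed most strongly into one output token?
# All names and weight values here are hypothetical.

def top_concepts_for_output(weight_row, concept_names, k=2):
    """Rank concepts by absolute weight into a single output logit."""
    ranked = sorted(zip(concept_names, weight_row),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return ranked[:k]

concept_names = ["charge", "hydrophobicity", "helix_propensity"]
row_for_K = [1.8, -0.1, 0.4]  # hypothetical weights into the lysine (K) logit

print(top_concepts_for_output(row_for_K, concept_names))
# → [('charge', 1.8), ('helix_propensity', 0.4)]
```

Reading the largest-magnitude entries of each row of the final linear layer is what makes the bottleneck debuggable: a concept with near-zero weight everywhere was learned but never used.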

[6/n] Interpretability: the concept bottleneck can be used to understand which concepts the model uses to predict a given amino acid. Reliably controlling model behavior: the concepts can be used as knobs to steer the model's output.

12.12.2024 22:50 · 👍 1 · 🔁 0 · 💬 1 · 📌 0
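The "concepts as knobs" idea above amounts to editing a concept activation before the decoder sees it. A minimal sketch, with hypothetical concept names and toy numbers (the real model operates on per-residue activations):

```python
# Toy sketch of steering output by clamping one concept activation.
# Concept names, values, and the decoder weights are all made up.

def decode(concepts, W_decode):
    """Linear decoder: each output logit is a weighted sum of concepts."""
    return [sum(w * c for w, c in zip(row, concepts)) for row in W_decode]

def set_concept(concepts, name, value, concept_names):
    """Clamp one interpretable concept to a target value."""
    edited = list(concepts)
    edited[concept_names.index(name)] = value
    return edited

concept_names = ["hydrophobicity", "charge"]
concepts = [0.25, -0.5]  # activations read off the bottleneck layer
W_decode = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

baseline = decode(concepts, W_decode)
steered = decode(set_concept(concepts, "charge", 2.0, concept_names), W_decode)
```

Because every prediction must pass through the bottleneck, edits like this change the output in a predictable direction, which is what makes the control reliable rather than a post-hoc patch.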

[5/n] We train masked language models with up to 3 billion parameters with a layer that directly encodes biophysical and biochemical concepts that biologists care about. These models match the performance of unconstrained masked language models.

12.12.2024 22:50 · 👍 3 · 🔁 0 · 💬 1 · 📌 0
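The layer described above can be sketched as a bottleneck whose neurons each stand for one named concept, and through which all downstream predictions must flow. This is a deliberately tiny illustration (plain Python, toy matrices, hypothetical class and attribute names), not the CB-pLM implementation:

```python
# Minimal sketch of a concept-bottleneck layer. Shapes, names, and the
# plain-list linear algebra are all simplifications for illustration.

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

class ConceptBottleneck:
    def __init__(self, W_concept, W_decode):
        self.W_concept = W_concept  # hidden_dim -> n_concepts
        self.W_decode = W_decode    # n_concepts -> output logits

    def forward(self, hidden):
        # Each bottleneck neuron is trained to track one named concept
        # (e.g. charge, hydrophobicity), so its activation is readable.
        concepts = matvec(self.W_concept, hidden)
        # Predictions are computed from the concepts alone, so the model
        # cannot route information around the interpretable layer.
        logits = matvec(self.W_decode, concepts)
        return concepts, logits
```

The key architectural choice is that the decoder sees only the concept activations: interpretability is enforced by construction rather than estimated after training.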

[4/n] Joint work with @ncfrey.bsky.social Tuomas Oikarinen @amywang1.bsky.social @juliusad.bsky.social @activelearner.bsky.social Taylor Joren @allen.bsky.social Hector Corrada Bravo @kyunghyuncho.bsky.social and Joseph Kleinhenz in @prescientdesign.bsky.social

12.12.2024 22:50 · 👍 4 · 🔁 0 · 💬 1 · 📌 0

[3/n] In our Concept Bottleneck Protein Language Model paper, we show that models with billions of parameters can be trained under interpretability constraints without performance degradation.

12.12.2024 22:50 · 👍 3 · 🔁 1 · 💬 1 · 📌 0

[2/n] But the thing is, more often than not, we know beforehand what we want/expect our model to learn, especially in very well-studied domains like Biology. So, instead of playing the guessing game, we trained a model that explicitly learns different concepts that biologists care about.

12.12.2024 22:50 · 👍 2 · 🔁 0 · 💬 1 · 📌 0

🧵
[1/n] Does AlphaFold3 "know" biophysics and the physics of protein folding? Are protein language models (pLMs) learning coevolutionary patterns? You can try to guess the answer to these questions using mechanistic interpretability.

12.12.2024 22:50 · 👍 36 · 🔁 4 · 💬 1 · 📌 1