
Gautam

@gautammalik.bsky.social

Research Assistant at University of Cambridge | Exploring deep learning in biology with big dreams of using AI to make drug discovery a little less complicated! 🧬🖥️

621 Followers  |  275 Following  |  37 Posts  |  Joined: 17.11.2024

Latest posts by gautammalik.bsky.social on Bluesky

[9/9]

Appreciate any advice, pointers to relevant papers, or even β€œdon’t do this” cautionary tales.
Thanks in advance!

#transformers #sparsity #maskedmodeling #deeplearning #symbolicAI #mlresearch #attentionmodels #structureddata

14.06.2025 05:22 — 👍 1    🔁 0    💬 0    📌 0

[8/9]

C) Local patching: training on smaller, denser subregions of the matrix
D) Contrastive or denoising autoencoder approaches instead of MLM
E) Treating the task as a kind of link prediction or structured matrix completion
F) Something entirely different?
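For option C, a minimal sketch of what local patching might look like (the function name, patch size, and density threshold are assumptions for illustration, not details from the thread):

```python
import torch

def sample_dense_patch(matrix_ids, patch=16, min_frac=0.05, bg=0, tries=50):
    """Randomly crop patch x patch subregions of the token matrix and keep
    the first one whose fraction of non-background entries exceeds
    min_frac, so training sees denser local context."""
    H, W = matrix_ids.shape
    crop = matrix_ids[:patch, :patch]  # fallback if no dense crop is found
    for _ in range(tries):
        r = torch.randint(0, H - patch + 1, (1,)).item()
        c = torch.randint(0, W - patch + 1, (1,)).item()
        candidate = matrix_ids[r:r + patch, c:c + patch]
        if (candidate != bg).float().mean() > min_frac:
            crop = candidate
            break
    return crop
```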

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

[7/9]

B) Loss weighting:
I also tried tweaking the loss weights to prioritize correct prediction of the rare relation types.
But it didn’t seem to help either, possibly because the model just ends up ignoring the background (.) altogether.
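A minimal sketch of this kind of loss weighting, assuming a small vocabulary with the background "." as class 0 (the weight values are illustrative, not tuned numbers from the thread):

```python
import torch
import torch.nn.functional as F

# Assumed 5-token vocab; down-weight the background "." class (id 0).
class_weights = torch.tensor([0.1, 1.0, 1.0, 1.0, 1.0])

def weighted_mlm_loss(logits, labels):
    """Cross-entropy that down-weights the background class; positions
    labelled -100 (unmasked cells) are ignored."""
    return F.cross_entropy(logits, labels,
                           weight=class_weights, ignore_index=-100)
```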

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

[6/9]

A) Biased masking:
I've tried biased masking (favoring the rare relation tokens), but it’s not helping much.

Can biased masking still work when the background class is that overwhelming? Or does it just drown out the signal anyway?
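One plausible way to implement the biased masking described above, using per-token mask probabilities (the token id and the probabilities are assumptions):

```python
import torch

NO_REL_ID = 0  # assumed id of the background "." token

def biased_mask(ids, p_rare=0.40, p_bg=0.05):
    """Sample mask positions with a higher probability for rare
    relation tokens than for the background token."""
    probs = torch.full(ids.shape, p_rare)
    probs[ids == NO_REL_ID] = p_bg
    return torch.bernoulli(probs).bool()
```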

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

[5/9]
I'm wondering whether standard masked modeling is the right fit for this format, or whether it needs adjustment. Some options I'm exploring or considering:

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

[4/9]

The challenge:
The matrix is highly sparse: most entries are the neutral "no relation" token. When a meaningful relation is masked, it is often surrounded by these neutral tokens. I'm concerned that the model may struggle to learn meaningful context, since most of what it sees is neutral.

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

[3/9]

I’m using a masked modeling objective, where random entries are masked, and the model learns to recover the original token based on the rest of the matrix. The goal is for the model to learn latent structure in how these symbolic relationships are distributed.
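A minimal sketch of this objective in PyTorch, assuming the matrix has been flattened to token ids and a model that maps tokens to per-cell logits (the mask id and masking ratio are assumptions):

```python
import torch
import torch.nn.functional as F

MASK_ID, MASK_PROB = 4, 0.15  # assumed mask-token id and masking ratio

def mlm_step(model, ids):
    """Mask random cells of the flattened matrix and train the model
    to recover the original tokens from the surrounding context."""
    labels = ids.clone()
    mask = torch.rand(ids.shape) < MASK_PROB
    labels[~mask] = -100        # loss is computed on masked cells only
    inputs = ids.clone()
    inputs[mask] = MASK_ID
    logits = model(inputs)      # assumed shape: (num_cells, vocab_size)
    return F.cross_entropy(logits, labels, ignore_index=-100)
```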

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

[2/9]

The relation types are drawn from a small, discrete vocabulary, and the "no relation" case is marked with a neutral symbol (e.g., .). Importantly, this "no relation" doesn’t necessarily mean irrelevance. I’m not sure whether it should be treated as informative context or just noise.

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

[1/9]

I’m working on a transformer-based model over a 2D symbolic matrix where each row and column represents elements from two discrete sets. Each cell contains a token representing a relationship type between the corresponding pair, or a default token when no known relation exists.
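A minimal sketch of one way such a matrix could be tokenized for a transformer, keeping row/column indices for 2D positional embeddings (the vocabulary here is made up purely for illustration):

```python
import torch

# Hypothetical relation vocabulary; "." is the no-relation token.
VOCAB = {".": 0, "A": 1, "B": 2, "C": 3, "[MASK]": 4}

def encode_matrix(matrix):
    """Flatten an n_rows x n_cols symbolic matrix into token ids, keeping
    row/column indices so the model can use 2D positional embeddings."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    ids = torch.tensor([[VOCAB[t] for t in row] for row in matrix]).view(-1)
    rows = torch.arange(n_rows).repeat_interleave(n_cols)
    cols = torch.arange(n_cols).repeat(n_rows)
    return ids, rows, cols

ids, rows, cols = encode_matrix([[".", "A", "."],
                                 ["B", ".", "."]])
```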

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0

Quick question for anyone doing transformer stuff in comp bio/chem or structured data!

Trying out masked modeling on a sparse setup, but not sure I'm going about it right. Curious how others have tackled this.

14.06.2025 05:22 — 👍 0    🔁 0    💬 1    📌 0
Post image

The surprising ineffectiveness of molecular dynamics coordinates for predicting bioactivity with machine learning

Checks whether MD-derived 3D information helps for bioactivity and target predictions over just static 3D information. Often, no 3D info at all is best.

P: chemrxiv.org/engage/chemr...

10.01.2025 07:22 — 👍 17    🔁 4    💬 1    📌 0

I like this perspective: it should be about evolution through iterations, rather than expecting the best-evolved algorithm right away. Everyone seems inspired by AlphaFold’s story, looking for a similar breakthrough in their domain, but maybe the focus should be on steady progress.

09.12.2024 10:46 — 👍 1    🔁 0    💬 0    📌 0

It’s definitely a challenging but fascinating area and I’d love to talk more about this!

09.12.2024 05:30 — 👍 0    🔁 0    💬 0    📌 0

But it isn’t as trivial as it sounds, right? Is it just about using some vector embeddings from domain knowledge and adding them to the model? I’m wondering, is it the lack of collaboration between scientists and AI/ML experts that’s hindering this kind of development?

09.12.2024 05:14 — 👍 0    🔁 0    💬 1    📌 0

This debate might be intense, but it’s moments like this that make me curious about where we’re all headed. Science evolves through friction, right?

09.12.2024 04:57 — 👍 0    🔁 0    💬 0    📌 0

What excites me, though, is the idea I keep hearing: can we combine the best of both worlds? Is that even possible? Are we talking about something like machine-learned potentials in MD simulations, or is it deeper than that? Please, help me out to gain some more perspective!

09.12.2024 04:57 — 👍 1    🔁 0    💬 2    📌 0

As a young researcher, I can’t help but notice how scientists using physics-based methods can sometimes show a bias. It’s clear they have their roots, but there's no denying that AI/ML methods come with their own set of caveats, many of which are tough to even recognize.

09.12.2024 04:57 — 👍 0    🔁 0    💬 1    📌 0

A young researcher’s perspective on the #DiffDock discussion between @gcorso.bsky.social and @prof-ajay-jain.bsky.social:

Honestly, I’m feeling both thrilled and a little lost. As someone new to the field, I can’t help but reflect on what this means for the future of docking and AI/ML in science.

09.12.2024 04:57 — 👍 0    🔁 0    💬 2    📌 0

Impressive work by @franknoe.bsky.social and team! A pragmatic tour-de-force combining experimental and predicted protein structures, MD simulations and experimental stability data to sample conformational ensembles of proteins. Think AlphaFold, but capturing multiple free energy minima.

08.12.2024 11:21 — 👍 39    🔁 8    💬 1    📌 0
Post image

When the size of test data is 5 compounds and accuracy is 100% 😌

06.12.2024 21:15 — 👍 15    🔁 1    💬 1    📌 0
Introducing Synthon Searching – RDKit blog: Searching unreasonably large chemical spaces in reasonable amounts of time.

There's a new #RDKit blog post introducing some new functionality that I'm really excited about: doing efficient substructure and similarity searches in very large chemical libraries:
greglandrum.github.io/rdkit-blog/p...
#ChemSky

03.12.2024 07:21 — 👍 97    🔁 29    💬 2    📌 2

#CASP16 results are in! Template-based VFold seems to be the leading method for nucleic acid structure prediction! AlphaFold2 and 3 still seem to be the best methods for protein monomer and complex prediction.

30.11.2024 22:28 — 👍 86    🔁 23    💬 2    📌 1

To wrap up, I’m curious about your thoughts on the future of docking models. Will the next breakthrough be GNN-based, transformer-based, or something like generative models (e.g., Diffusion)? I'd love to hear your opinions on what direction the field is heading. Let me know your thoughts!

22.11.2024 19:46 — 👍 0    🔁 1    💬 1    📌 0
GitHub - gautammalik-git/BindAxTransformer: BindAxTransformer is a transformer-based model trained on protein-ligand interactions using self-supervised learning. This repository provides a detailed implementation and educational resource, sh...

It’s not real-world ready but a good foundation to explore. And yes, science does need a protein emoji!

github.com/gautammalik-...

22.11.2024 19:46 — 👍 0    🔁 1    💬 0    📌 0

I’ve created a GitHub repo for building a pre-trained BERT model on protein-ligand data. It’s designed for anyone seeking a starting point for transformer architectures with protein-ligand complex data, and I’ve focused extensively on the math behind it.

22.11.2024 19:46 — 👍 0    🔁 0    💬 1    📌 0

Post image

2. Iterative Updates:
The ligand atoms keep moving into formation, adjusting their positions iteratively.

3. Final Coordinates:
After several rounds, the model spits out the final 3D coordinates of the ligand atoms.

And there you have it, Dockformer in action!
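A rough sketch of what such an iterative update loop could look like. This is a guess at the control flow, not Dockformer's actual implementation; dimensions, round count, and the fact that features stay fixed here are all simplifying assumptions:

```python
import torch
import torch.nn as nn

class CoordRefiner(nn.Module):
    """Illustrative refinement loop: predict per-atom xyz offsets from
    atom features and apply them for a fixed number of rounds. (In the
    real model the features would also be refreshed each round.)"""
    def __init__(self, d=128, rounds=8):
        super().__init__()
        self.delta = nn.Linear(d, 3)   # features -> xyz offset
        self.rounds = rounds

    def forward(self, feats, coords):
        for _ in range(self.rounds):
            coords = coords + self.delta(feats)  # atoms step into place
        return coords  # final 3D ligand coordinates
```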

22.11.2024 19:46 — 👍 0    🔁 0    💬 1    📌 0
Post image

1. Intra- and Intermolecular Modules:
Two types of attention layers come into play:
a. Intra-ligand attention: Helps the ligand atoms organize themselves correctly.
b. Ligand-protein cross-attention: Helps the ligand atoms adjust based on the protein’s pocket geometry.
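A minimal sketch of these two attention types using standard multi-head attention. The dimensions are placeholders and this is an illustrative reading of the description above, not the paper's code:

```python
import torch
import torch.nn as nn

class InteractionBlock(nn.Module):
    """Intra-ligand self-attention plus ligand-protein cross-attention."""
    def __init__(self, d=128, heads=8):
        super().__init__()
        self.intra = nn.MultiheadAttention(d, heads, batch_first=True)
        self.cross = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, lig, prot):
        # Intra-ligand attention: ligand atoms attend to each other.
        lig = lig + self.intra(lig, lig, lig)[0]
        # Cross-attention: ligand atoms query the protein pocket.
        lig = lig + self.cross(lig, prot, prot)[0]
        return lig
```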

22.11.2024 19:46 — 👍 0    🔁 0    💬 1    📌 0

Step 4: The Grand Finale – Structure Generation

Now that Dockformer understands the molecular interactions, it’s time to predict the 3D coordinates of the ligand atoms. This happens in the structure module.

What happens here?

22.11.2024 19:46 — 👍 0    🔁 0    💬 1    📌 0
