
Bharath Ramsundar

@rbhar90.bsky.social

Founder and CEO of Deep Forest Sciences. Lead Developer of DeepChem. AI for Science Researcher.

665 Followers  |  212 Following  |  3 Posts  |  Joined: 14.09.2023

Latest posts by rbhar90.bsky.social on Bluesky

A Deep Generative Model for the Inverse Design of Transition Metal Ligands and Complexes Deep generative models yielding transition metal complexes (TMCs) remain scarce despite the key role of these compounds in industrial catalytic processes, anticancer therapies, and the energy transition. Compared to drug discovery within the chemical space of organic molecules, TMCs pose further challenges, including the encoding of chemical bonds of higher complexity and the need to optimize multiple properties. In this work, we developed a generative model for the inverse design of transition metal ligands and complexes, based on the junction tree variational autoencoder (JT-VAE). After implementing a SMILES-based encoding of the metal–ligand bonds, the model was trained with the tmQMg-L ligand library, allowing for the generation of thousands of novel, highly diverse monodentate (κ1) and bidentate (κ2) ligands, including imines, phosphines, and carbenes. Further, the generated ligands were labeled with two target properties reflecting the stability and electron density of the associated homoleptic iridium TMCs: the HOMO–LUMO gap (ϵ) and the charge of the metal center (qIr). This data was used to implement a conditional model that generated ligands from a prompt, with the single- or dual-objective of optimizing either or both the ϵ and qIr properties and allowing for chemical interpretation based on the optimization trajectories. The optimizations also had an impact on other chemical properties, including ligand dissociation energies and oxidative addition barriers. A similar model was implemented to condition ligand generation by solubility and steric bulk.

Strandgaard et al. train a metal-aware junction-tree VAE on 30k ligands, then steer its latent space to tailor Ir-complex gaps and charges, generating tens of thousands of novel, synthetically accessible candidates—generative ML can navigate transition-metal space. pubs.acs.org/doi/10.1021/...
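The paper's key move is steering a trained latent space toward target properties (the HOMO–LUMO gap ϵ and metal charge qIr). As a rough illustration of that idea, the sketch below runs gradient-based optimization of a property predictor over VAE latent vectors. The ToyDecoder, PropertyHead, and 56-dimensional latent are placeholders of my own, not the authors' JT-VAE code.

```python
# Minimal sketch of property-guided latent-space optimization for a generative
# ligand model. The decoder and property head are toy stand-ins for the trained
# JT-VAE and the (epsilon, qIr) predictors described in the paper; names and
# dimensions are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

LATENT_DIM = 56  # assumed latent size, purely illustrative

class ToyDecoder(nn.Module):
    """Stand-in for the JT-VAE decoder mapping a latent vector to a ligand."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, 64))
    def forward(self, z):
        return self.net(z)  # the real model decodes to a SMILES/junction tree

class PropertyHead(nn.Module):
    """Stand-in predictor of a target property (e.g. HOMO-LUMO gap) from z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, z):
        return self.net(z).squeeze(-1)

def optimize_latent(z0, property_head, steps=50, lr=0.05, maximize=True):
    """Gradient ascent/descent on the predicted property, starting from z0."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    sign = -1.0 if maximize else 1.0  # Adam minimizes, so flip sign to maximize
    for _ in range(steps):
        opt.zero_grad()
        loss = sign * property_head(z).sum()
        loss.backward()
        opt.step()
    return z.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    decoder, prop = ToyDecoder(), PropertyHead()
    z_start = torch.randn(8, LATENT_DIM)            # 8 random seed points in latent space
    z_opt = optimize_latent(z_start, prop, maximize=True)
    print("property before:", prop(z_start).detach().numpy().round(3))
    print("property after: ", prop(z_opt).detach().numpy().round(3))
    _ = decoder(z_opt)                              # decode optimized latents to candidates
```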

25.04.2025 09:46 — 👍 2    🔁 1    💬 0    📌 0
Data collected with the new sequencing platform HyDrop v2 is shown. First, a schematic overview of the bead batches of the microfluidic beads is followed by a tSNE and a barplot comparing costs with 10x Genomics. Then, a track of mouse cortex data is shown together with nucleotide contribution scores in the FIRE enhancer in microglia; here, the HyDrop- and 10x-based models show the same contributions. On the right, the Drosophila embryo collection is explained; in the paper, HyDrop v2 and 10x data are compared to sciATAC data. A nucleotide contribution score is also shown, where the HyDrop v2 and 10x models again agree, just as in mouse.

Our new preprint is out! We optimized our open-source platform, HyDrop (v2), for scATAC sequencing and generated new atlases for the mouse cortex and Drosophila embryo with 607k cells. Now, we can train sequence-to-function models on data generated with HyDrop v2!
www.biorxiv.org/content/10.1...
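The "nucleotide contribution scores" in the figure come from sequence-to-function models trained on the scATAC data. As a rough illustration only (not the preprint's pipeline), the sketch below scores each position of a one-hot DNA sequence with gradient × input on a toy convolutional model; both the architecture and the attribution method are stand-ins.

```python
# Sketch of nucleotide contribution scores from a sequence-to-function model:
# a toy 1D CNN over one-hot DNA, scored with gradient x input. Real models for
# scATAC data typically use more elaborate attribution methods; this only
# illustrates the idea.
import torch
import torch.nn as nn

class ToySeqModel(nn.Module):
    def __init__(self, seq_len=500):
        super().__init__()
        self.conv = nn.Conv1d(4, 32, kernel_size=11, padding=5)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(32, 1)   # predicted accessibility for one cell type
    def forward(self, x):              # x: (batch, 4, seq_len) one-hot DNA
        h = torch.relu(self.conv(x))
        return self.head(self.pool(h).squeeze(-1)).squeeze(-1)

def contribution_scores(model, onehot):
    """Per-nucleotide gradient x input scores for a one-hot encoded sequence."""
    onehot = onehot.clone().requires_grad_(True)
    model(onehot.unsqueeze(0)).sum().backward()
    return (onehot.grad * onehot).sum(dim=0)   # (seq_len,) contribution per position

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToySeqModel()
    seq = torch.zeros(4, 500)
    seq[torch.randint(0, 4, (500,)), torch.arange(500)] = 1.0   # random one-hot DNA
    scores = contribution_scores(model, seq)
    print(scores.shape, scores[:10])
```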

04.04.2025 08:52 — 👍 55    🔁 25    💬 2    📌 2
Construction of Arithmetic Teichmuller Spaces IV: Proof of the abc-conjecture This is a continuation of my work on Arithmetic Teichmuller Spaces developed in the present series of papers. In this paper, I show that the Theory of Arithmetic Teichmuller Spaces leads, using Shinic...

An updated version of Kirti Joshi's claimed proof of the ABC conjecture is out arxiv.org/abs/2403.10430
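For readers who want the statement being claimed, the abc conjecture in its standard Oesterlé–Masser form is:

```latex
% Standard statement of the abc conjecture (Oesterle-Masser), for context only.
% rad(n) is the radical of n: the product of the distinct primes dividing n.
\text{For every } \varepsilon > 0, \text{ only finitely many coprime triples }
(a, b, c) \in \mathbb{Z}_{>0}^{3} \text{ with } a + b = c \text{ satisfy }
c > \operatorname{rad}(abc)^{1+\varepsilon}.
```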

05.03.2025 11:04 — 👍 1    🔁 0    💬 0    📌 0
Deep-Learning Based Docking Methods: Fair Comparisons to Conventional Docking Workflows The diffusion learning method, DiffDock, for docking small-molecule ligands into protein binding sites was recently introduced. Results included comparisons to more conventional docking approaches, wi...

While dockers keep docking (now with diffusion and AI😀), Pat Walters and Ajay Jain offer a sobering assessment of the ‘perceived’ accuracy of such methods! DiffDock shots fired 🔥 The conclusion is totally worth reading in full! #chemsky #compchemsky
arxiv.org/abs/2412.02889
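For context on how such comparisons are usually scored: the headline metric in docking benchmarks of this kind is the fraction of top-ranked poses within 2 Å RMSD of the crystal pose. The snippet below is a generic sketch of that metric with placeholder coordinates, not the evaluation code from the paper.

```python
# Sketch of the standard pose-accuracy metric used in docking comparisons:
# the fraction of top-ranked poses within 2 Angstrom RMSD of the crystal pose.
# Coordinates are random placeholders; symmetry correction and proper ligand
# preparation are deliberately omitted.
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two (n_atoms, 3) coordinate arrays."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum(axis=1).mean())

def success_rate(predicted_poses, reference_poses, cutoff=2.0):
    """Fraction of predicted top-1 poses within `cutoff` Angstrom of the reference."""
    hits = [rmsd(p, r) < cutoff for p, r in zip(predicted_poses, reference_poses)]
    return float(np.mean(hits))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = [rng.normal(size=(30, 3)) for _ in range(100)]            # 100 "crystal" ligands
    preds = [r + rng.normal(scale=0.8, size=r.shape) for r in refs]  # perturbed "docked" poses
    print(f"top-1 success rate (<2 A): {success_rate(preds, refs):.2f}")
```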

06.12.2024 05:33 — 👍 29    🔁 4    💬 1    📌 0
Scalable emulation of protein equilibrium ensembles with generative deep learning Following the sequence and structure revolutions, predicting the dynamical mechanisms of proteins that implement biological function remains an outstanding scientific challenge. Several experimental t...

Another deep-learning approach to sample conformational ensembles of proteins: BioEmu, out of Microsoft Research.
www.biorxiv.org/content/10.1...
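One generic way to summarize a sampled equilibrium ensemble (not necessarily how BioEmu is evaluated) is the per-residue root-mean-square fluctuation about the ensemble mean; here is a minimal sketch with placeholder coordinates.

```python
# Generic sketch: per-residue RMSF of a conformational ensemble about its mean.
# Random coordinates stand in for generated samples; frame superposition, which
# a real analysis would need, is omitted for brevity.
import numpy as np

def rmsf(ensemble):
    """ensemble: (n_samples, n_residues, 3) CA coordinates -> (n_residues,) RMSF."""
    mean = ensemble.mean(axis=0, keepdims=True)
    return np.sqrt(((ensemble - mean) ** 2).sum(axis=2).mean(axis=0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    samples = rng.normal(size=(500, 120, 3))   # 500 sampled conformations, 120 residues
    flex = rmsf(samples)
    print("most flexible residue:", int(flex.argmax()), "RMSF:", round(float(flex.max()), 2))
```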

06.12.2024 06:54 — 👍 36    🔁 12    💬 3    📌 0
Hypothalamic deep brain stimulation augments walking after spinal cord injury - Nature Medicine Whole-brain anatomical and activity surveys identify the lateral hypothalamus as a key driver of recovery from spinal cord injury, leading to a deep brain stimulation therapy that augments the recover...

Hypothalamic deep brain stimulation augments walking after spinal cord injury (SCI). #NatureMedicine #medsky #scisky

"Targeting specific brain regions to maximize the engagement of spinal cord-projecting neurons in the recovery of neurological functions after SCI."

www.nature.com/articles/s41...

03.12.2024 05:28 — 👍 50    🔁 18    💬 0    📌 0

There are 2 mistakes you can make about LLMs:

① Thinking everything LLMs say is correct, they can reason, and with a bit more scale they’ll get us to superintelligence

② Thinking LLMs are good for almost nothing—they are FAR better at all #NLProc tasks than previous methods

12.10.2024 22:38 — 👍 58    🔁 10    💬 1    📌 1
A Practical Guide to Large Scale Docking

On this week's "Deep into the Forest," we cover the practical aspects of running a large-scale docking screen, such as cleaning up the binding pocket, dealing with conformations, and more: deepforest.substack.com/p/a-practica...
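As a rough companion to the post, the sketch below covers only the ligand side of such a screen: enumerating and minimizing conformers with RDKit before handing each molecule to a docking engine. The dock_one_ligand function is a hypothetical placeholder, and pocket cleanup and receptor preparation are not shown.

```python
# Sketch of the ligand side of a large-scale docking screen: enumerate and
# minimize conformers with RDKit, then pass each molecule to a docking engine.
# `dock_one_ligand` is a hypothetical placeholder; receptor preparation and
# binding-pocket cleanup happen outside this snippet.
from rdkit import Chem
from rdkit.Chem import AllChem

def prepare_ligand(smiles, n_confs=10, seed=42):
    """Parse a SMILES, add hydrogens, embed and MMFF-minimize multiple conformers."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    mol = Chem.AddHs(mol)
    AllChem.EmbedMultipleConfs(mol, numConfs=n_confs, randomSeed=seed)
    AllChem.MMFFOptimizeMoleculeConfs(mol)
    return mol

def dock_one_ligand(mol):
    """Placeholder for the actual docking call (e.g. shelling out to a docking program)."""
    return 0.0  # would return a docking score

if __name__ == "__main__":
    library = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]  # tiny stand-in for a screening library
    scores = {}
    for smi in library:
        mol = prepare_ligand(smi)
        if mol is not None:
            scores[smi] = dock_one_ligand(mol)
    print(sorted(scores.items(), key=lambda kv: kv[1]))
```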

23.10.2023 19:18 — 👍 0    🔁 0    💬 0    📌 0

This week on "Deep into the Forest," we explore the use of AlphaFold2 to re-score antibody-antigen complex structure predictions. deepforest.substack.com/p/using-alph...
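As a rough illustration of the idea (not the scoring recipe from the post), one common AlphaFold2-derived re-scoring signal is the mean predicted aligned error over inter-chain residue pairs; the sketch below ranks candidate models by that score using placeholder PAE matrices.

```python
# Sketch of re-scoring antibody-antigen models with an AlphaFold2-style
# confidence signal: mean predicted aligned error (PAE) over inter-chain
# residue pairs (lower = more confident interface). PAE matrices here are
# random placeholders; the blog post's exact scoring recipe may differ.
import numpy as np

def interchain_pae(pae, chain_a_len):
    """Mean PAE over antibody-antigen (inter-chain) residue pairs.

    pae: (n_res, n_res) predicted aligned error matrix from AF2.
    chain_a_len: number of residues in the first chain (the antibody)."""
    ab = slice(0, chain_a_len)
    ag = slice(chain_a_len, pae.shape[0])
    return 0.5 * (pae[ab, ag].mean() + pae[ag, ab].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n_ab, n_ag = 220, 130
    models = {f"model_{i}": rng.uniform(0, 30, size=(n_ab + n_ag, n_ab + n_ag))
              for i in range(5)}
    ranked = sorted(models, key=lambda k: interchain_pae(models[k], n_ab))
    print("best-scoring model:", ranked[0])
```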

19.10.2023 03:45 — 👍 5    🔁 0    💬 0    📌 0
