
Diego Calanzone

@diadochus.bsky.social

« artificia docuit fames » 📖 deep learning, reasoning 🧪 drug design @Mila_Quebec 🏛️ AI grad @UniTrento halixness.github.io

549 Followers  |  679 Following  |  16 Posts  |  Joined: 15.11.2024

Latest posts by diadochus.bsky.social on Bluesky

GitHub - ddidacus/mol-moe: Repository for "Training Preference-Guided Routers for Molecule Generation"

Special thanks to Biogen and CIFAR for their support, to @proceduralia.bsky.social and @pierrelucbacon.bsky.social for their valuable supervision, and to the entire Mila community for their feedback, discussions, and support. Code, paper, and models are public: github.com/ddidacus/mol...

20.02.2025 19:43 — 👍 3    🔁 1    💬 0    📌 0

Mol-MoE scales better with additional property experts than classic merging, showing larger gains and achieving the highest overall scores. Simple reward scalarization does not work here. Next, we aim to further calibrate Mol-MoE and test its performance on larger sets of objectives.

20.02.2025 19:43 — 👍 1    🔁 0    💬 1    📌 0

The resulting model achieves a smaller mean absolute error when generating compounds with the requested properties, surpassing the alternative methods. Arguably, the learned routing functions help mitigate task interference.

20.02.2025 19:43 — 👍 1    🔁 0    💬 1    📌 0
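
To make the evaluation concrete, here is a minimal sketch of how such a property MAE could be computed, assuming RDKit as the property oracle; the property set (QED, logP) and function names are illustrative, not necessarily the paper's exact setup.

```python
# Sketch: MAE between the properties requested in the prompt and the
# properties of the generated molecules, with RDKit as the property oracle.
# The property set (QED, logP) is illustrative, not the paper's exact setup.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def property_mae(requests, generated_smiles):
    """requests: one dict per prompt, e.g. {"qed": 0.8, "logp": 2.5};
    generated_smiles: one generated SMILES string per request."""
    errors = []
    for target, smiles in zip(requests, generated_smiles):
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:  # invalid molecule: skipped here (could also count as max error)
            continue
        measured = {"qed": QED.qed(mol), "logp": Descriptors.MolLogP(mol)}
        errors.extend(abs(measured[k] - v) for k, v in target.items())
    return sum(errors) / len(errors)

print(property_mae([{"qed": 0.8, "logp": 2.5}], ["CCO"]))  # ethanol as a dummy sample
```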

But the relationship between interpolation coefficients and properties isn't strictly linear, so a calibration function is needed. Mol-MoE addresses this by training only the routers to predict optimal merging weights from the prompt, enabling more precise control and less interference.

20.02.2025 19:43 — 👍 1    🔁 0    💬 1    📌 0
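
A rough sketch of this idea in PyTorch, under the assumption that each expert contributes a parameter delta to a frozen base model; only the router below would be trained, and all names and shapes are illustrative rather than the repository's actual code.

```python
# Rough sketch: a preference-guided router maps a prompt embedding to merging
# weights over frozen single-property experts; only the router is trained.
import torch
import torch.nn as nn

class PropertyRouter(nn.Module):
    def __init__(self, prompt_dim: int, n_experts: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(prompt_dim, 128), nn.ReLU(), nn.Linear(128, n_experts)
        )

    def forward(self, prompt_emb: torch.Tensor) -> torch.Tensor:
        # One merging weight per expert, normalized to sum to 1.
        return torch.softmax(self.net(prompt_emb), dim=-1)

def merge_parameter(base, expert_deltas, mix):
    # base: a frozen base-model parameter; expert_deltas: per-expert deltas of
    # the same shape; mix: router output of shape [n_experts].
    merged = base.clone()
    for w, delta in zip(mix, expert_deltas):
        merged = merged + w * delta
    return merged
```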

Think, think, think... what if we trained experts on single properties separately and leveraged model merging techniques to obtain a multi-property model? We re-implement rewarded soups and obtain a robust baseline capable of generating high-quality, out-of-distribution samples.

20.02.2025 19:43 — 👍 1    🔁 0    💬 1    📌 0
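
For reference, a minimal sketch of a rewarded-soups-style baseline: single-property experts are merged by linearly interpolating their weights with user-chosen preference coefficients. This is illustrative code, not the repository's implementation.

```python
# Minimal sketch: experts fine-tuned on single property rewards are merged by
# linear interpolation of their weights (all state dicts share the same keys).
def rewarded_soup(expert_state_dicts, coeffs):
    """expert_state_dicts: one state_dict (of tensors) per expert;
    coeffs: preference weights, assumed non-negative and summing to 1."""
    merged = {}
    for key in expert_state_dicts[0]:
        merged[key] = sum(c * sd[key].float() for c, sd in zip(coeffs, expert_state_dicts))
    return merged

# e.g. 60% preference for the potency expert, 40% for the solubility expert:
# merged_sd = rewarded_soup([potency_sd, solubility_sd], [0.6, 0.4])
```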

In our ablation studies, instruction-tuned models struggle to reach higher property values because they are never explicitly optimized for them. Even RL fine-tuning on multiple objectives can plateau or degrade, and re-balancing the objectives requires re-training, which limits steerability.

20.02.2025 19:43 — 👍 1    🔁 0    💬 1    📌 0
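
A toy illustration of why re-balancing requires re-training: with a scalarized reward, the trade-off between properties is frozen into weights chosen before RL fine-tuning starts. Property names and weights here are placeholders.

```python
# Illustrative scalarized reward for multi-objective RL fine-tuning: the
# trade-off is fixed by the lambdas chosen before training, so changing the
# balance between properties means running the fine-tuning again.
def scalarized_reward(props: dict, lambdas: dict) -> float:
    # props: property scores of a generated molecule, e.g. {"qed": 0.7, "sa": 0.4}
    # lambdas: fixed objective weights, e.g. {"qed": 1.0, "sa": 0.5}
    return sum(lambdas[k] * props[k] for k in lambdas)

print(scalarized_reward({"qed": 0.7, "sa": 0.4}, {"qed": 1.0, "sa": 0.5}))  # 0.9
```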

Drug discovery is inherently a multi-objective optimization problem: candidate molecules must not only bind effectively to target proteins to trigger a specific function, but also meet safety and compatibility criteria to become drugs. Is supervised learning sufficient?

20.02.2025 19:43 — 👍 1    🔁 0    💬 1    📌 0

Molecule sequence models learn vast molecular spaces, but how do we navigate them efficiently? We explored multi-objective RL, SFT, and merging, but these fall short in balancing control and diversity. We introduce **Mol-MoE**: a mixture of experts for controllable molecule generation 🧵

20.02.2025 19:43 — 👍 1    🔁 1    💬 1    📌 0
Logically Consistent Language Models via Neuro-Symbolic Integration Large language models (LLMs) are a promising venue for natural language understanding and generation. However, current LLMs are far from reliable: they are prone to generating non-factual information ...

Finally, LOgically COnsistent (LoCo) LLaMas can outperform solver-based baselines and SFT! I thank @nolovedeeplearning.bsky.social and @looselycorrect.bsky.social for their guidance in realizing this project. Get in touch or come chat in Singapore!

arxiv.org/abs/2409.13724

29.01.2025 23:41 — 👍 6    🔁 2    💬 0    📌 0

Our method makes LLaMa's knowledge more consistent with any given knowledge graph while seeing only a portion of it! It can transfer logical rules to similar or derived concepts. As proposed by @ekinakyurek.bsky.social et al., you can use an LLM-generated KB to reason over its own knowledge.

29.01.2025 23:41 — 👍 7    🔁 1    💬 2    📌 0

Yes! We propose to leverage the Semantic Loss as a regularizer: it maximizes the likelihood of world (model) assignments that satisfy any given logical rule. We include efficient solvers in the training pipeline to perform model counting over the LLM's own beliefs.

29.01.2025 23:41 — 👍 3    🔁 0    💬 1    📌 0
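
A toy sketch of the Semantic Loss for a propositional rule over the model's belief probabilities: it is the negative log probability mass of the truth assignments that satisfy the rule. Here the model counting is done by brute-force enumeration; the paper's point is that solvers make this step efficient. Function and rule names are illustrative.

```python
# Toy Semantic Loss: -log of the probability mass the model's beliefs assign
# to truth assignments satisfying a given propositional rule.
import itertools, math

def semantic_loss(rule, probs):
    """rule: function mapping a tuple of booleans to True/False;
    probs: the model's probability that each atomic statement is true."""
    mass = 0.0
    for assignment in itertools.product([False, True], repeat=len(probs)):
        if rule(assignment):
            p = 1.0
            for a, q in zip(assignment, probs):
                p *= q if a else (1.0 - q)
            mass += p
    return -math.log(mass)

# Modus ponens over atoms (A, A_implies_B, B): not (A and A->B) or B
mp = lambda x: (not (x[0] and x[1])) or x[2]
print(semantic_loss(mp, [0.9, 0.9, 0.2]))   # ~1.04: beliefs violate the rule
print(semantic_loss(mp, [0.9, 0.9, 0.95]))  # ~0.04: beliefs are consistent
```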

Various background works focus on instilling single consistency rules, e.g. A and not-A cannot both be true (negation, Burns et al.), or: if A is true and A implies B, then B is true (modus ponens). Can we derive a general objective function that combines logical rules dynamically?

29.01.2025 23:41 — 👍 2    🔁 0    💬 1    📌 0
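
For intuition, here are single-rule consistency penalties of the kind such works use, written over belief probabilities; the exact formulations vary by paper, so treat these as toy versions of the idea rather than any paper's objective.

```python
# Illustrative single-rule consistency penalties over belief probabilities.
def negation_penalty(p_a: float, p_not_a: float) -> float:
    # A and not-A should not both be believed: their probabilities should sum to 1.
    return (p_a + p_not_a - 1.0) ** 2

def modus_ponens_penalty(p_a: float, p_a_implies_b: float, p_b: float) -> float:
    # If A and A->B are both believed, B should be believed at least as much.
    return max(0.0, p_a * p_a_implies_b - p_b) ** 2
```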

🥳 "Logically Consistent Language Models via Neuro-Symbolic Integration" just accepted at #ICLR2025!
We focus on instilling logical rules in LLMs with an efficient loss, leading to higher factuality & (self) consistency. How? 🧵

29.01.2025 23:41 — 👍 13    🔁 2    💬 1    📌 0

RNA FISH -> a fish

03.12.2024 13:13 — 👍 56    🔁 3    💬 2    📌 0

Used Cursor (based on Claude 3.5 Sonnet) instead of VS Code for a week now. Early feedback:
✔️ great for parallelizing training and inference
✔️ multi-file context, can easily set up hyperparam sweeps
✔️ great for visualizing results with high-level guidance. Welcome, spider plots!

29.11.2024 17:34 — 👍 1    🔁 0    💬 0    📌 0
Announcing the NeurIPS 2024 Test of Time Paper Awards – NeurIPS Blog

Test of Time Paper Awards are out! 2014 was a wonderful year with lots of amazing papers. That's why we decided to highlight two papers: GANs (@ian-goodfellow.bsky.social et al.) and Seq2Seq (Sutskever et al.). Both papers will be presented in person 😁

Link: blog.neurips.cc/2024/11/27/a...

27.11.2024 15:48 — 👍 110    🔁 14    💬 1    📌 2

I guess it also depends on the field/subfield?

23.11.2024 20:50 — 👍 0    🔁 0    💬 0    📌 0

Cancer researchers, message me: I'd like to know about your work and your research questions!

23.11.2024 20:49 — 👍 3    🔁 1    💬 0    📌 0
[EEML'24] Sander Dieleman - Generative modelling through iterative refinement (YouTube video by EEML Community)

While we're starting up over here, I suppose it's okay to reshare some old content, right?

Here's my lecture from the EEML 2024 summer school in Novi Sad 🇷🇸, where I tried to give an intuitive introduction to diffusion models: youtu.be/9BHQvQlsVdE

Check out other lectures on their channel as well!

19.11.2024 09:57 — 👍 115    🔁 12    💬 3    📌 0

I've created an initial Grumpy Machine Learners starter pack. If you think you're grumpy and you "do machine learning", nominate yourself. If you're on the list, but don't think you are grumpy, then take a look in the mirror.

go.bsky.app/6ddpivr

18.11.2024 14:40 — 👍 418    🔁 55    💬 124    📌 15
