
Leonardo Cotta

@cottascience.bsky.social

Postdoc fellow at Vector Institute; Machine Learning & Causal Inference; From BH 🔺🇧🇷 in TO 🇨🇦 http://cottascience.github.io

1,003 Followers  |  247 Following  |  55 Posts  |  Joined: 20.09.2023

Latest posts by cottascience.bsky.social on Bluesky

I'd add data/task understanding as a separate mid layer. Most papers I know break in the transition from high to mid.

12.08.2025 19:36 | 👍 1    🔁 0    💬 1    📌 0
Milton Nascimento & esperanza spalding: Tiny Desk (Home) Concert

the goat of Brazilian music w/ the best of (current) American music
www.youtube.com/watch?v=jFUh...

09.08.2025 15:06 | 👍 2    🔁 0    💬 0    📌 0

This is why I personally love TMLR. If it's correct and well-written, let's publish. The interesting papers are the ones the community actively recognizes in their work, e.g. citing, extending, turning into products, etc. (a process independent of publication).

30.07.2025 23:43 | 👍 1    🔁 0    💬 0    📌 0

I agree with most of your thread, but classifying "uninteresting work" is quite hard nowadays. Papers became this "hype-seeking" game, where out of the 10 hyped papers of the month, at most 1 survives further investigation of the results. And even if we think we're immune to this, what counts as interesting?

30.07.2025 23:43 | 👍 2    🔁 0    💬 1    📌 0
Scaling Laws Are Unreliable for Downstream Tasks: A Reality Check Downstream scaling laws aim to predict task performance at larger scales from pretraining losses at smaller scales. Whether this prediction should be possible is unclear: some works demonstrate that t...

I loved this new preprint by Lourie/Hu/@kyunghyuncho.bsky.social. If you really wanna convince someone you're training a foundation model, or proposing better methodology, loss scaling laws aren't enough. They have to be tied to downstream performance; it shouldn't be vibes.
arxiv.org/abs/2507.00885

26.07.2025 22:43 | 👍 5    🔁 1    💬 0    📌 0

We're at ICML, drop us a line if you're excited about this direction.

📄 Paper: arxiv.org/abs/2507.02083
💻 Code: github.com/h4duan/SciGym
🌐 Website: h4duan.github.io/scigym-bench...
🗂️ Dataset: huggingface.co/datasets/h4d...

16.07.2025 20:16 | 👍 1    🔁 0    💬 0    📌 0

I'm very excited about our new work, SciGym. How can we scale scientific agents' evaluation?
TL;DR: Systems biologists have spent decades encoding biochemical networks (metabolic pathways, gene regulation, etc.) into machine-runnable systems. We can use these as "dry labs" to test AI agents!

16.07.2025 20:16 | 👍 2    🔁 0    💬 1    📌 0
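The "dry lab" idea can be illustrated with a toy simulator (purely illustrative; SciGym itself wraps curated, machine-runnable SBML models): a single enzymatic reaction S -> P with Michaelis-Menten kinetics, integrated by forward Euler. An agent can probe a simulator like this with in-silico experiments instead of a wet lab.

```python
# Toy "dry lab": one biochemical reaction S -> P with Michaelis-Menten
# kinetics, integrated by forward Euler. All parameter values below are
# made up for illustration.
def simulate(s0=10.0, vmax=1.0, km=0.5, t_end=50.0, dt=0.01):
    """Return final (substrate, product) concentrations."""
    s, p = s0, 0.0
    for _ in range(int(t_end / dt)):
        rate = vmax * s / (km + s)  # Michaelis-Menten reaction rate
        s -= rate * dt              # substrate is consumed...
        p += rate * dt              # ...and converted into product
    return s, p

s_final, p_final = simulate()
# mass is conserved (s + p stays at the initial 10.0), and by t = 50
# the substrate is essentially exhausted
```

An agent interacting with such a system can propose interventions (change `s0`, knock out a reaction) and observe trajectories, which is exactly the experiment loop a benchmark can score.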

Also, I see ITCS more as "out of the box", "bold" ideas, or even new areas; I don't see the papers having simplicity as a goal, but that's just my experience.

30.06.2025 00:48 | 👍 0    🔁 0    💬 1    📌 0

Mhm, I agree with the idealistic part; I've certainly seen the same. But I know quite a few papers that are aligned w/ the call; tbh this happens in any venue. I think the message and the openness to this kind of paper are important, though.

30.06.2025 00:46 | 👍 0    🔁 0    💬 2    📌 0

I wish we had an ML equivalent of SOSA (Symposium on Simplicity in Algorithms). "simpler algorithms manifest a better understanding of the problem at hand; they are more likely to be implemented and trusted by practitioners; they are more easily taught" www.siam.org/conferences-....

29.06.2025 17:04 | 👍 3    🔁 0    💬 1    📌 0

this is not my area, but if you think of it in terms of randomized algorithms (BPP, PP), the hard part is usually generation, at least for the algorithms we tend to design, e.g. the Schwartz-Zippel lemma. (Although in theory you can have the "hard part" in verification for any problem.)

14.06.2025 16:17 | 👍 2    🔁 0    💬 1    📌 0
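The Schwartz-Zippel example can be made concrete with a minimal sketch of polynomial identity testing (illustrative code, not tied to any particular paper): a random evaluation point distinguishes two unequal polynomials of degree d over a field of size q with probability at least 1 - d/q, so "generation" of a witness is easy once randomness is allowed, while each check is a cheap evaluation.

```python
import random

# Polynomial identity testing via the Schwartz-Zippel lemma: evaluate two
# black-box polynomials at random points of a large prime field. If they
# differ (degree <= d), one trial exposes the difference with probability
# >= 1 - d/PRIME.
PRIME = 2**61 - 1  # large field, so the error bound d/PRIME is negligible

def polys_equal(p, q, n_vars, trials=20):
    """Randomized equality test for two black-box polynomials over F_PRIME."""
    for _ in range(trials):
        point = [random.randrange(PRIME) for _ in range(n_vars)]
        if p(*point) % PRIME != q(*point) % PRIME:
            return False  # found a witness point: definitely unequal
    return True  # equal with high probability

# (x + y)^2 vs x^2 + 2xy + y^2: the same polynomial, so no point can
# distinguish them.
same = polys_equal(lambda x, y: (x + y) ** 2,
                   lambda x, y: x * x + 2 * x * y + y * y, n_vars=2)

# (x + y)^2 vs x^2 + y^2: differ by 2xy, so a random point exposes the
# difference with overwhelming probability.
diff = polys_equal(lambda x, y: (x + y) ** 2,
                   lambda x, y: x * x + y * y, n_vars=2)
```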

It takes one terrible paper for knowledgeable people to stop reading all your papers; this risk is often not accounted for.

09.06.2025 20:01 | 👍 1    🔁 0    💬 1    📌 0

Maybe check the Cat S22; it gives you the basics, e.g. WhatsApp + GPS and nothing else.

08.06.2025 19:40 | 👍 2    🔁 0    💬 0    📌 0
Damage and Misrepair Signatures: Compact Representations of Pan-cancer Mutational Processes Mutational signatures of single-base substitutions (SBSs) characterize somatic mutation processes which contribute to cancer development and progression. However, current mutational signatures do not ...

Please check out our new approach to modeling somatic mutation signatures.

DAMUTA has independent Damage and Misrepair signatures whose activities are more interpretable and more predictive of DNA repair defects than COSMIC SBS signatures 🧬🖥️🧪

www.biorxiv.org/content/10.1...

03.06.2025 00:34 | 👍 41    🔁 17    💬 0    📌 0

it just sounds like "see you three times" ;) it's like some people named "Sinho" who are often assumed to be Portuguese/Brazilian; but from what I heard it's a variation of Singh (not sure though)

30.05.2025 23:02 | 👍 1    🔁 0    💬 1    📌 0

One simple way to reason about this: treatment assignment guarantees you have the right P(T|X). Self-selection changes P(X), a different quantity. Looking at your IPW estimator, you can see that changing P(X) will bias it regardless of P(T|X).

18.04.2025 15:08 | 👍 3    🔁 2    💬 0    📌 0
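The point can be checked in a few lines of simulation (a toy model; the distributions and effect sizes are made up for illustration): with the correct propensity P(T|X), IPW recovers the ATE under whatever P(X) the data came from, so a self-selected shift in P(X) moves the estimate even though P(T|X) is exactly right.

```python
import numpy as np

# IPW under self-selection: the propensity P(T|X) is known and correct in
# both runs; only the covariate distribution P(X) differs.
rng = np.random.default_rng(0)
n = 200_000

def ipw_ate(x):
    """IPW estimate of the ATE, using the true propensity P(T=1|X)."""
    p = 1.0 / (1.0 + np.exp(-x))            # known propensity score
    t = rng.binomial(1, p)                  # treatment assignment
    y = x * t + rng.normal(0, 0.1, len(x))  # treatment effect equals X
    return np.mean(t * y / p - (1 - t) * y / (1 - p))

# Target population: X ~ N(0, 1), so the true ATE is E[X] = 0.
ate_pop = ipw_ate(rng.normal(0.0, 1.0, n))

# Self-selected sample: X ~ N(1, 1). Same correct P(T|X), but IPW now
# recovers E[X] = 1 under the shifted P(X) -- biased for the target
# population even though the propensities are exact.
ate_selected = ipw_ate(rng.normal(1.0, 1.0, n))
```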

I haven't been up to date with the model-collapse literature, but it's crazy how many papers consider the case where people only reuse data from the model's distribution. This never happens; there's always some human curation or conditioning that yields some type of real-world, new data.

13.04.2025 18:26 | 👍 2    🔁 0    💬 0    📌 0
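A minimal version of that closed-loop setup (a standard toy Gaussian example, not from any specific paper) shows both halves of the claim: refitting purely on the model's own samples drifts toward collapse, while even a modest fraction of curated real data keeps the fit anchored.

```python
import numpy as np

# Toy model collapse: repeatedly fit a Gaussian to samples drawn from the
# previous generation's fit, optionally mixing in fresh "real" data from
# the true N(0, 1) each round. Parameters are chosen small to make the
# effect visible, not to match any real training pipeline.
rng = np.random.default_rng(0)

def run(generations=2000, n=50, real_fraction=0.0):
    mu, sigma = 0.0, 1.0                       # start at the real distribution
    for _ in range(generations):
        k = int(n * real_fraction)             # fresh real samples this round
        synth = rng.normal(mu, sigma, n - k)   # data sampled from the model
        data = np.concatenate([synth, rng.normal(0.0, 1.0, k)])
        mu, sigma = data.mean(), data.std()    # refit the model on the mix
    return sigma

sigma_closed_loop = run(real_fraction=0.0)  # trained only on its own output
sigma_refreshed = run(real_fraction=0.2)    # 20% curated real data per round
# the closed loop's fitted sigma shrinks toward zero; the refreshed run
# stays near the true value of 1
```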

this general idea of using an external world/causal model given by a human and using the LM only for inference is really cool; it's also the insight behind our work on NATURAL. Do you guys think it's possible to write more general software for the interface DAG -> LLM inference -> estimate?

12.04.2025 18:27 | 👍 1    🔁 0    💬 1    📌 0
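One way such an interface could look (entirely hypothetical; none of these classes or functions exist in NATURAL or any library): the human-supplied DAG fixes the adjustment structure, and the LLM appears only as a black-box scorer inside the estimator.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical DAG -> LLM inference -> estimate interface. The DAG comes
# from a human; the "LLM" is any callable mapping a prompt to a number.

@dataclass
class CausalDAG:
    nodes: list[str]
    edges: list[tuple[str, str]]  # (parent, child)

    def parents(self, node: str) -> list[str]:
        return [p for p, c in self.edges if c == node]

@dataclass
class CausalQuery:
    treatment: str
    outcome: str

def estimate(dag: CausalDAG, query: CausalQuery, records: list[str],
             llm: Callable[[str], float]) -> float:
    """Average an LLM-scored quantity over free-text records."""
    # the DAG pins down what to condition on; the LLM only does inference
    adjust = dag.parents(query.treatment)
    prompts = [
        f"Given record: {r}\nAdjusting for {adjust}, "
        f"score the effect of {query.treatment} on {query.outcome}."
        for r in records
    ]
    scores = [llm(p) for p in prompts]
    return sum(scores) / len(scores)

dag = CausalDAG(nodes=["age", "drug", "recovery"],
                edges=[("age", "drug"), ("age", "recovery"), ("drug", "recovery")])
ate = estimate(dag, CausalQuery("drug", "recovery"),
               records=["patient A ...", "patient B ..."],
               llm=lambda prompt: 0.5)  # stub LLM for illustration
```

The design choice is that swapping the estimator (IPW, outcome regression, etc.) or the model behind `llm` should not touch the DAG, which is what would make the software general.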
Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks While machine learning on graphs has demonstrated promise in drug design and molecular property prediction, significant benchmarking challenges hinder its further progress and relevance. Current bench...

This is my favourite "graph paper" of the last year or two. We also need to start including non-NN baselines, e.g. fingerprints + CatBoost, if the goal is real-world impact and not getting published asap. I also recommend following @wpwalters.bsky.social's blog.
arxiv.org/abs/2502.14546

24.03.2025 17:21 | 👍 7    🔁 1    💬 1    📌 0

Unbelievable news.

Pancreatic is one of the deadliest cancers.

New paper shows personalized mRNA vaccines can induce durable T cells that attack pancreatic cancer, with 75% of patients cancer-free at three years, far, far better than standard of care.

www.nature.com/articles/s41...

27.02.2025 17:03 | 👍 7329    🔁 1944    💬 142    📌 321

Oh gotcha. I think it's just super cheesy to quote Feynman at this point haha, but it's a good philosophy to embrace.

20.02.2025 01:14 | 👍 0    🔁 0    💬 0    📌 0

In what contexts do you think it's misused? Just curious; I'm a big fan and might be overusing it 😅

20.02.2025 01:11 | 👍 0    🔁 0    💬 1    📌 0
The Ultra-Scale Playbook - a Hugging Face Space by nanotron The ultimate guide to training LLM on large GPU Clusters

After 6+ months in the making and over a year of GPU compute, we're excited to release the "Ultra-Scale Playbook": hf.co/spaces/nanot...

A book to learn all about 5D parallelism, ZeRO, CUDA kernels, and how/why to overlap compute & comms, with theory, motivation, interactive plots and 4000+ experiments!

19.02.2025 18:10 | 👍 182    🔁 52    💬 3    📌 5

if you're feeling uninspired and getting NaNs everywhere, you can give it your codebase, describe the problem, and ask for suggestions to try or ways to debug. I think of it more as a debugging assistant than a code generator.

19.02.2025 15:02 | 👍 2    🔁 0    💬 0    📌 0
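In the same debugging-assistant spirit, a useful first step for localizing NaNs yourself (an illustrative helper, not from any library) is to scan the named arrays of a training step and report which ones went non-finite:

```python
import numpy as np

# Scan a dict of named arrays (parameters, activations, gradients) and
# report which ones contain NaN or inf entries, so you can localize where
# numerical problems first appear in a training step.
def find_nonfinite(tensors: dict) -> dict:
    """Map each offending name to counts of NaN and inf entries."""
    report = {}
    for name, arr in tensors.items():
        arr = np.asarray(arr, dtype=float)
        n_nan = int(np.isnan(arr).sum())
        n_inf = int(np.isinf(arr).sum())
        if n_nan or n_inf:
            report[name] = {"nan": n_nan, "inf": n_inf}
    return report

# Typical use: call after each suspicious stage of the pipeline.
state = {
    "weights": np.ones((2, 2)),
    "grads": np.array([1.0, np.nan, np.inf]),  # an exploded gradient
    "logits": np.array([0.1, -0.2]),
}
bad = find_nonfinite(state)  # flags only "grads"
```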

I've always hated the "reasoning models" for code assistance, since I think the most useful application of LLMs is really writing the boring helper functions and letting us focus on the hard work. However, I found o3 to be particularly useful when debugging ML code, e.g., 1/2

19.02.2025 15:02 | 👍 1    🔁 0    💬 1    📌 0
Reconstruction for Powerful Graph Representations

if you remove one node at a time you get Reconstruction GNNs 🙃 proceedings.neurips.cc/paper/2021/h...

14.02.2025 23:00 | 👍 1    🔁 0    💬 0    📌 0
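The one-node-at-a-time idea maps directly onto the classic reconstruction "deck" (toy code, not the paper's implementation): delete each vertex in turn, represent each resulting card, and aggregate the representations permutation-invariantly.

```python
# The "deck" of a graph is the multiset of subgraphs obtained by deleting
# one vertex at a time; a reconstruction-style representation aggregates
# over this deck instead of reading the whole graph at once.
def vertex_deleted_subgraphs(n, edges):
    """Yield (vertices, edges) for each one-vertex-deleted subgraph."""
    for v in range(n):
        kept = [u for u in range(n) if u != v]
        sub_edges = [(a, b) for a, b in edges if a != v and b != v]
        yield kept, sub_edges

def deck_embedding(n, edges, card_repr):
    """Sum a per-card representation over the deck (permutation-invariant)."""
    return sum(card_repr(vs, es) for vs, es in vertex_deleted_subgraphs(n, edges))

# Toy "representation": the edge count of each card, on a triangle graph.
# Deleting any vertex of a triangle leaves a single edge, so the deck has
# three cards of one edge each.
triangle = [(0, 1), (1, 2), (0, 2)]
emb = deck_embedding(3, triangle, lambda vs, es: len(es))  # 1 + 1 + 1 = 3
```

In an actual reconstruction GNN the per-card representation would itself be a GNN embedding rather than an edge count; the aggregation structure is the same.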

100%. Also, sometimes the use/task might be the same, but the user's notion of bias can vary. E.g., people might expect group or individual notions of fairness.

14.02.2025 12:03 | 👍 0    🔁 0    💬 0    📌 0

I wouldn't call open source democratic in this case. The models are free but the inference compute isn't. Maybe democratic in the sense of a liberal democracy, but not in terms of accessibility. Agree w/ the rest though!

26.01.2025 20:51 | 👍 1    🔁 0    💬 1    📌 0

The whole DeepSeek-R1 thing just highlights computer science's main feature: you can do A LOT with a small team and some (limited) resources. This is how we've been able to scale innovation and why free software is important.

25.01.2025 16:51 | 👍 6    🔁 0    💬 0    📌 0

This is an amazing resource (of resources) for machine learners

24.01.2025 14:14 | 👍 2    🔁 0    💬 0    📌 0
