@cottascience.bsky.social
Postdoc fellow at Vector Institute; Machine Learning & Causal Inference; from BH 🇧🇷 in TO 🇨🇦. http://cottascience.github.io
I'd add data/task understanding as a separate mid layer. Most papers I know break in the transition from high to mid.
12.08.2025 19:36

the goat of Brazilian music w/ the best of (current) American music
www.youtube.com/watch?v=jFUh...
This is why I personally love TMLR. If it's correct and well-written, let's publish. The interesting papers are the ones the community actively recognizes in their work, e.g. citing, extending, turning into products, etc. (a process independent of publication).
30.07.2025 23:43

I agree with most of your thread, but classifying "uninteresting work" is quite hard nowadays. Papers have become a "hype-seeking" game, where out of the 10 hyped papers of the month, at most 1 survives further investigation of the results. And even if we think we're immune to this, what counts as interesting?
30.07.2025 23:43

I loved this new preprint by Lourie/Hu/@kyunghyuncho.bsky.social. If you really wanna convince someone you're training a foundation model, or proposing better methodology, loss scaling laws aren't enough. It has to be tied w/ downstream performance. It shouldn't be vibes.
arxiv.org/abs/2507.00885
We're at ICML; drop us a line if you're excited about this direction.
📄 Paper: arxiv.org/abs/2507.02083
💻 Code: github.com/h4duan/SciGym
🌐 Website: h4duan.github.io/scigym-bench...
🗃️ Dataset: huggingface.co/datasets/h4d...
I'm very excited about our new work: SciGym. How can we scale the evaluation of scientific agents?
TL;DR: Systems biologists have spent decades encoding biochemical networks (metabolic pathways, gene regulation, etc.) into machine-runnable systems. We can use these as "dry labs" to test AI agents!
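Concretely, many of these curated networks are distributed as SBML files that off-the-shelf simulators can execute. A minimal sketch of the "dry lab" idea, assuming the libRoadRunner package is installed and that model.xml is a placeholder path to some SBML model (not a file from the benchmark):

```python
# Minimal "dry lab": load a machine-runnable biochemical network and simulate it.
# Assumes `pip install libroadrunner`; model.xml is a placeholder SBML file.
import roadrunner

rr = roadrunner.RoadRunner("model.xml")  # parse the SBML network
result = rr.simulate(0, 100, 50)         # integrate from t=0 to t=100, 50 points
print(result.colnames)                   # time + species concentrations
```

An agent can then propose experiments (perturb a species, re-simulate, observe trajectories) without ever touching a wet lab.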
Also, I see ITCS as more of an "out of the box", "bold idea", or even new-area venue; I don't see the papers having simplicity as a goal. But that's just my experience.
30.06.2025 00:48

Mhm, I agree with the idealistic part, I certainly have seen the same. But I know quite a few papers that are aligned with the call; tbh this happens in any venue. I think the message and the openness to this kind of paper are important, though.
30.06.2025 00:46

I wish we had an ML equivalent of SOSA (Symposium On Simplicity in Algorithms). "simpler algorithms manifest a better understanding of the problem at hand; they are more likely to be implemented and trusted by practitioners; they are more easily taught" www.siam.org/conferences-....
29.06.2025 17:04

this is not my area, but if you think of it in terms of a randomized algorithm (BPP, PP), the hard part is usually the generation, at least for the algorithms we tend to design, e.g. the Schwartz-Zippel lemma. (Although in theory you can have the "hard part" in verification for any problem.)
14.06.2025 16:17
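As a concrete illustration of that generation/verification asymmetry, here is a minimal sketch of randomized polynomial identity testing via Schwartz-Zippel (my toy example, not from the thread): verifying that two expressions compute the same polynomial takes a few random evaluations, with per-trial error probability at most degree/|field|.

```python
import random

P = 2**61 - 1  # a large prime field

def equal_polys(f, g, n_vars, degree, trials=20):
    """Test f == g as polynomials by evaluating at random points mod P."""
    for _ in range(trials):
        xs = [random.randrange(P) for _ in range(n_vars)]
        if f(*xs) % P != g(*xs) % P:
            return False          # definitely different polynomials
    return True                   # equal with prob >= 1 - (degree/P)**trials

# (x + y)^2 vs x^2 + 2xy + y^2: same polynomial, different expressions
f = lambda x, y: (x + y) ** 2
g = lambda x, y: x * x + 2 * x * y + y * y
print(equal_polys(f, g, n_vars=2, degree=2))             # True
print(equal_polys(f, lambda x, y: x * x + y * y, 2, 2))  # False (w.h.p.)
```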
It takes one terrible paper for knowledgeable people to stop reading all your papers; this risk is often not accounted for.
09.06.2025 20:01

Maybe check the Cat S22; it gives you the basics, e.g. WhatsApp + GPS and nothing else.
08.06.2025 19:40

Please check out our new approach to modeling somatic mutation signatures.
DAMUTA has independent Damage and Misrepair signatures whose activities are more interpretable and more predictive of DNA repair defects than COSMIC SBS signatures 🧬🖥️🧪
www.biorxiv.org/content/10.1...
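For context on what "signature activities" means here: COSMIC-style SBS analyses factorize a samples-by-96-trinucleotide-contexts count matrix, typically with NMF. A generic sketch of that baseline (shapes and parameters are illustrative; DAMUTA's damage/misrepair factorization is different, see the paper):

```python
import numpy as np
from sklearn.decomposition import NMF

# Fake counts: 50 samples x 96 trinucleotide mutation contexts
counts = np.random.default_rng(0).poisson(5, size=(50, 96))

model = NMF(n_components=5, init="nndsvda", max_iter=500)
activities = model.fit_transform(counts)   # per-sample signature activities
signatures = model.components_             # signatures over the 96 contexts
print(activities.shape, signatures.shape)  # (50, 5), (5, 96)
```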
it just sounds like "see you three times" ;) it's like some people named "Sinho", which is often confused with Portuguese/Brazilian names; but from what I heard it's a variation of Singh (not sure though)
30.05.2025 23:02

One simple way to reason about this: treatment assignment guarantees you have the right P(T|X). Self-selection changes P(X), which is a different quantity. Looking at your IPW estimator, you can see that changing P(X) will bias it regardless of P(T|X).
18.04.2025 15:08
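A toy simulation of that argument (all names and numbers are illustrative, not from the thread): IPW with the true propensity recovers E[tau(X)] under whatever P(X) generated the sample, so a self-selected sample with a shifted P(X) is biased for the target population's ATE even though P(T|X) is exactly right.

```python
import numpy as np

rng = np.random.default_rng(0)

e = lambda x: 0.3 + 0.4 * x      # true propensity P(T=1|X), always correct
tau = lambda x: 1.0 + 2.0 * x    # treatment effect varies with X

def sample(n, p_x1):
    x = rng.binomial(1, p_x1, n).astype(float)  # covariate distribution P(X)
    t = rng.binomial(1, e(x))                   # assignment uses the right P(T|X)
    y = tau(x) * t + x + rng.normal(0, 0.1, n)
    return x, t, y

def ipw_ate(x, t, y):
    w = t / e(x) - (1 - t) / (1 - e(x))         # IPW with the *true* propensity
    return float(np.mean(w * y))

# Target population: P(X=1) = 0.5, so ATE = E[tau(X)] = 2.0
print(ipw_ate(*sample(200_000, 0.5)))  # ~2.0: correct P(T|X) and correct P(X)
# Self-selection shifts P(X) to 0.9 while P(T|X) stays exactly right:
print(ipw_ate(*sample(200_000, 0.9)))  # ~2.8: biased for the target-population ATE
```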
I haven't been up to date with the model collapse literature, but it's crazy how many papers consider the case where people only reuse data from the model distribution. This never happens: there's always some human curation or conditioning that yields some type of "real-world, new, data".
13.04.2025 18:26
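A toy sketch of why that assumption matters (my illustration; the Gaussian setup and mixing fraction are arbitrary): refitting a model purely on its own samples lets the variance drift toward collapse, while mixing in a fraction of fresh "real-world" data each generation keeps it anchored.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_sigma(mix_real, rounds=500, n=100):
    mu, sigma = 0.0, 1.0                                # "model" = fitted Gaussian
    for _ in range(rounds):
        synth = rng.normal(mu, sigma, n)                # sample from current model
        real = rng.normal(0.0, 1.0, int(mix_real * n))  # fresh human-curated data
        data = np.concatenate([synth, real])
        mu, sigma = data.mean(), data.std()             # "retrain" on the mix
    return sigma

print(final_sigma(mix_real=0.0))  # pure self-training: sigma drifts toward 0
print(final_sigma(mix_real=0.3))  # 30% fresh data each round: sigma stays near 1
```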
this general idea of using an external world/causal model given by a human and using the LM only for inference is really cool; it's also the insight behind our work on NATURAL. Do you guys think it's possible to write more general software for the interface DAG -> LLM_inference -> estimate?
12.04.2025 18:27

This is my favourite "graph paper" of the last 1 or 2 years. We also need to start including non-NN baselines, e.g. fingerprints + CatBoost, if the goal is real-world impact and not getting published asap. I also recommend following @wpwalters.bsky.social's blog.
arxiv.org/abs/2502.14546
Unbelievable news.
Pancreatic cancer is one of the deadliest cancers.
A new paper shows personalized mRNA vaccines can induce durable T cells that attack pancreatic cancer, with 75% of patients cancer-free at three years, far, far better than the standard of care.
www.nature.com/articles/s41...
Oh, gotcha. I think it's just super cheesy to quote Feynman at this point haha, but it's a good philosophy to embrace.
20.02.2025 01:14

In what contexts do you think it's misused? Just curious, I'm a big fan and might be overusing it 🙂
20.02.2025 01:11

After 6+ months in the making and over a year of GPU compute, we're excited to release the "Ultra-Scale Playbook": hf.co/spaces/nanot...
A book to learn all about 5D parallelism, ZeRO, CUDA kernels, and how/why to overlap compute & comms, with theory, motivation, interactive plots and 4000+ experiments!
if you're feeling uninspired and getting NaNs everywhere, you can give it your codebase, describe the problem, and ask for suggestions to try or ways to debug. I think of it more as a debugging assistant than a code generator.
19.02.2025 15:02

I've always hated "reasoning models" for code assistance, since I think the most useful application of LLMs is really writing the boring helper functions and letting us focus on the hard work. However, I found o3 to be particularly useful when debugging ML code, e.g., 1/2
19.02.2025 15:02

if you remove one at a time you get reconstruction GNNs 🙂 proceedings.neurips.cc/paper/2021/h...
14.02.2025 23:00

100%. Also, sometimes the use/task might be the same, but the user's notion of bias can vary. E.g., people might expect group or individual notions of fairness.
14.02.2025 12:03
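To make the distinction concrete, a toy check of both notions on made-up scores (all numbers illustrative): demographic parity compares positive rates across groups, while an individual notion asks that similar inputs get similar scores.

```python
import numpy as np

scores = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.6])
group = np.array([0, 0, 0, 1, 1, 1])
x = np.array([1.0, 0.1, 0.9, 0.95, 0.05, 0.8])  # 1-D features for simplicity

# Group notion (demographic parity): positive-rate gap at threshold 0.5
pos = scores > 0.5
gap = abs(pos[group == 0].mean() - pos[group == 1].mean())
print(f"demographic parity gap: {gap:.2f}")

# Individual notion: Lipschitz-style check |s_i - s_j| <= L * |x_i - x_j|
L = 2.0
pairs = [(i, j) for i in range(len(x)) for j in range(i + 1, len(x))]
violations = [(i, j) for i, j in pairs
              if abs(scores[i] - scores[j]) > L * abs(x[i] - x[j])]
print("individual-fairness violations:", violations)
```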
I wouldn't call open-source democratic in this case. The models are free but the inference compute isn't. Maybe democratic in the sense of a liberal democracy, but not in terms of accessibility. Agree w/ the rest though!
26.01.2025 20:51

The whole DeepSeek-R1 thing just highlights computer science's main feature: you can do A LOT with a small team and some (limited) resources. This is how we've been able to scale innovation and why free software is important.
25.01.2025 16:51

This is an amazing resource (of resources) for machine learners.
24.01.2025 14:14