How to fake a robotics result: a short blog post listing many sins which annoy me (many of which I am guilty of from time to time, to be fair)
open.substack.com/pub/itcanthi...
Haha @cpaxton.bsky.social is on fire:
open.substack.com/pub/itcanthi...
Some useful tips in there even for non-roboticists looking to make their paper artificially look good
e-ink is a lifesaver for anyone with cybersickness
02.02.2026 21:55 — 👍 0 🔁 0 💬 0 📌 0
And for a lot of Gen X journalists and academics, the answer to (a) — assuming existing skills and plans — is legit "no." AI can be useful, for sure, but the paths it differentially advantages are not the paths where they have accumulated momentum, expertise, and social capital. +
19.01.2026 00:18 — 👍 14 🔁 2 💬 1 📌 0
Big thanks to Olga Novitskaia for spotting the issue!
Link to the broken history mapper: useast.ensembl.org/Homo_sapiens...
A quick story on how we matched genes across two datasets with different Ensembl versions.
1. There must be a tool out there. Ensembl ID History converter ofc!
2. Doesn't match Ensembl search outcomes due to a bug
3. Lesson: use this client instead github.com/Ensembl/ense... ! (sketch below)
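For anyone hitting the same version mismatch, here is a minimal sketch of doing the mapping programmatically. The client link above is truncated, so this calls the documented Ensembl REST endpoint (POST archive/id) directly; the "latest" and "is_current" field names follow my reading of the REST docs, so verify them against your release.

```python
# Sketch: map possibly-retired Ensembl stable IDs to their latest records
# via the public REST API, instead of the web-based ID History converter.
import requests

def latest_ensembl_ids(stable_ids):
    """Return {stable_id: archive record} for each queried Ensembl ID."""
    resp = requests.post(
        "https://rest.ensembl.org/archive/id",
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        json={"id": list(stable_ids)},
    )
    resp.raise_for_status()
    return {rec["id"]: rec for rec in resp.json()}

records = latest_ensembl_ids(["ENSG00000157764"])
for stable_id, rec in records.items():
    # "latest" is the newest versioned ID; "is_current" flags live IDs.
    print(stable_id, "->", rec.get("latest"), "| current:", rec.get("is_current"))
```

Running both datasets' ID lists through the same endpoint gives one consistent target release to match on.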
Also awesome for finding posters at a conference!
13.01.2026 13:02 — 👍 2 🔁 0 💬 0 📌 0
Predicting cell state in previously unseen conditions has typically required retraining for each new biological context. Today, Arc is releasing Stack, a foundation model that learns to simulate cell state under novel conditions directly at inference time, no fine-tuning required.
09.01.2026 18:43 — 👍 11 🔁 6 💬 1 📌 1
This. You can add a link in a reply to the post and avoid the penalty
12.01.2026 19:51 — 👍 0 🔁 0 💬 0 📌 0
A screenshot with the link alongside? That could draw in those who want to join the discussion where it started.
12.01.2026 17:37 — 👍 1 🔁 0 💬 1 📌 0
Introducing DroPE: Extending Context by Dropping Positional Embeddings
We found embeddings like RoPE aid training but bottleneck long-sequence generalization. Our solution’s simple: treat them as a temporary training scaffold, not a permanent necessity.
arxiv.org/abs/2512.12167
pub.sakana.ai/DroPE
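Reading only the abstract above (this is not the authors' code), the scaffold idea can be sketched as: apply RoPE inside attention during training, then run the same weights with it switched off at long-context inference. All names and shapes below are illustrative.

```python
# Toy sketch of "positional embeddings as a training scaffold":
# train attention with RoPE applied, evaluate long contexts without it.
import torch

def rope(x, base=10000.0):
    """Rotary position embedding over (batch, seq, dim), half-split style."""
    _, t, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype) / half)
    angles = torch.arange(t, dtype=x.dtype)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def attention(q, k, v, use_rope=True):
    if use_rope:  # scaffold on during training
        q, k = rope(q), rope(k)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return scores.softmax(dim=-1) @ v

q = k = v = torch.randn(1, 8, 16)
short = attention(q, k, v, use_rope=True)   # training-time behavior
long_ = attention(q, k, v, use_rope=False)  # inference without the scaffold
```

How the model is weaned off the embeddings (a schedule, a calibration pass, etc.) is exactly what the paper would specify; the sketch only shows the on/off switch.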
One very familiar pattern in AI and science right now is going from a lot of false starts on hard tasks (there have been near-misses where AI appears to solve an Erdős problem but just finds an old solution no one knew about) to actually doing the thing soon after.
Three Erdős problems in 3 days.
We introduce epiplexity, a new measure of information that provides a foundation for how to select, generate, or transform data for learning systems. We have been working on this for almost 2 years, and I cannot contain my excitement! arxiv.org/abs/2601.03220 1/7
07.01.2026 17:27 — 👍 143 🔁 34 💬 9 📌 8
Finished the essay. moultano.wordpress.com/2025/12/30/c...
30.12.2025 13:40 — 👍 142 🔁 30 💬 17 📌 23
Autonomous RIVR delivery robots in Pittsburgh
24.12.2025 18:56 — 👍 201 🔁 45 💬 20 📌 39
A quick Sunday rewrite of an old blog post about how one should evaluate the effectiveness of an empirical paper:
open.substack.com/pub/emergere...
Good researchers obsess over evals
The story of Olmo 3 (post-training), told through evals
NeurIPS Talk tomorrow.
Upper Level Room 2, 10:35AM.
Slides: docs.google.com/presentation...
Elon’s power is that he offers a positive vision of the future. This attracts employees, funding, support. There’s a massive techno positive hole and he fills it.
17.11.2025 14:02 — 👍 58 🔁 5 💬 8 📌 11
Active learning with DrugReflector beats SotA in phenotypic hit rate for virtual screening. Includes a single-cell perturbation dataset with 10 cell lines and 104 compounds. Out in @science.org now!
Grateful to Cellarity and @fabiantheis.bsky.social for the opportunity to contribute to this outstanding project!
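For readers unfamiliar with the setup (and to be clear, this is not DrugReflector itself): the generic active-learning loop behind results like this looks roughly like the sketch below, with the model, features, and batch size all placeholders.

```python
# Generic active-learning loop for phenotypic virtual screening (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(104, 32))   # compound features (placeholder)
y = rng.random(104)              # measured phenotypic activity (placeholder)

labeled = list(range(8))         # compounds screened so far
pool = [i for i in range(len(X)) if i not in labeled]

for round_idx in range(3):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[labeled], y[labeled])
    preds = model.predict(X[pool])                     # score the pool
    picks = [pool[i] for i in np.argsort(preds)[-8:]]  # screen the top 8
    labeled += picks                                   # "run the assay"
    pool = [i for i in pool if i not in picks]
    print(f"round {round_idx}: {len(labeled)} compounds screened")
```

The hit-rate gains come from the pick rule: each round, the model concentrates a limited screening budget on the compounds it currently predicts to be most active.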
Three-panel thing. In the left panel, we use error bars. In the second, we declare the biggest number statistically significant but still show error bars. In LLM science, we just report the biggest number.
What if we did a single run and declared victory?
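A toy numeric version of the joke, with made-up scores: the seed-to-seed spread can be larger than the gap the headline number claims.

```python
# Made-up scores: report mean ± std over seeds, not a single lucky run.
import statistics

runs = {"ours": [71.2, 69.8, 70.5], "baseline": [70.9, 70.7, 70.4]}
for name, scores in runs.items():
    mu, sd = statistics.mean(scores), statistics.stdev(scores)
    print(f"{name}: {mu:.1f} ± {sd:.1f} over {len(scores)} seeds")

# Single-run reporting: "ours" (71.2) beats "baseline" (70.9).
# With error bars: 70.5 ± 0.7 vs 70.7 ± 0.3, i.e. no real difference.
```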
23.10.2025 02:28 — 👍 339 🔁 70 💬 13 📌 9
Community notes when
13.10.2025 04:16 — 👍 41 🔁 5 💬 2 📌 1
Yeah, this is my biggest “AGI hype is not real” tell: almost no one at these companies behaves like it’s real.
11.10.2025 20:58 — 👍 15 🔁 2 💬 0 📌 0
My skepticism of LLM-as-scientist stems from how imbalanced the literature is. The median paper is a mildly negative result presented as positive, it's unclear how to RLHF on good vs. bad hypotheses, etc. We barely know how to teach this skill; how can we RLHF it?
28.09.2025 20:40 — 👍 123 🔁 10 💬 8 📌 2
For folks considering grad school in ML, my advice is to explore programs that mix ML with a domain interest. ML programs are wildly oversubscribed, while a lot of the fun right now is in figuring out what you can do with it.
25.09.2025 03:25 — 👍 153 🔁 17 💬 8 📌 7
A must-read before you jump on your first omics project: the top response here www.reddit.com/r/bioinforma...
28.08.2025 18:06 — 👍 3 🔁 0 💬 0 📌 0
I think scientists assumed people could tell serious science apart from the bad fluff and ideological work that we all mostly ignore. We were not ready for people to start conflating them all.
23.08.2025 18:20 — 👍 37 🔁 2 💬 7 📌 1
The more rigorous peer review happens in conversations and reading groups after the paper is out, with reputational costs for publishing bad work.
17.08.2025 16:12 — 👍 49 🔁 5 💬 2 📌 3
There are people, in tech (and now in the government!), who will mislead you about what current AI models are capable of. If we don't call them out, they'll drag us all down.
23.07.2025 20:01 — 👍 20 🔁 6 💬 3 📌 0