I feel that too!
10.02.2026 14:12

@kordinglab.bsky.social
Penn Prof, deep learning, brains, #causality, rigor, http://neuromatch.io, Transdisciplinary optimist, Dad, Loves outdoors, 🦋, c4r.io
Yeah. Neuroskyence is not having a moment
10.02.2026 13:45

Left wing political activism is doing fine on bsky
10.02.2026 13:44

I am of course happy to be proven wrong, but I find the framing of this preprint a bit frustrating. We gave similar feedback before, yet the manuscript doesn't seem to engage with the counter-evidence. I would appreciate clarification on the results discrepancy -- please feel free to update the PR!
10.02.2026 11:17

In this paper. But neuroAI compares to text with LLMs, vision with convnets, speech with RNNs, etc.
09.02.2026 22:57

Planyourscience.com now applauds the good changes you make - and pushes back on the less good ones.
09.02.2026 20:43

neuroAI comparisons of ANNs to brains do have a range of problems. Even more than I had realized. And I was worried before: www.biorxiv.org/content/10.1...
09.02.2026 14:13

I agree that something is missing on twitter.
09.02.2026 14:07

I would be curious about the experience of others on Substack.
09.02.2026 14:00

In my observation it's weirdly less the "will click on a post" rate than the "will follow up and actually read a paper" level.
09.02.2026 13:39

Is Bluesky dying? For me it used to drive about as much engagement for math and science content as twitter. Now twitter is up by a factor of five.
09.02.2026 13:24

To be clear, neuroAI has a lot of problems we did not discuss. E.g. it's typically opaque whether authors are making a mechanistic or a normative argument.
09.02.2026 11:35

I guess. But the key is that without you changing the world (e.g. by stimulating) you can never get there. And even if you can, the problem may still be impossible (there are sets of causal systems that produce exactly the same conditionals).
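The parenthetical point (distinct causal systems producing exactly the same conditionals) has a classic linear-Gaussian instance. A minimal sketch of my own, not from the thread; the parameters (a = 0.8, sigma = 0.6) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a, sigma = 0.8, 0.6          # chosen so that Var(Y) = a^2 + sigma^2 = 1

# Model A: X -> Y
x = rng.standard_normal(n)
y = a * x + sigma * rng.standard_normal(n)

# Model B: Y -> X, with coefficients matched to Model A's joint distribution
vy = a**2 + sigma**2
b = a / vy
y2 = np.sqrt(vy) * rng.standard_normal(n)
x2 = b * y2 + np.sqrt(sigma**2 / vy) * rng.standard_normal(n)

# Identical covariances: no amount of observational data separates the models...
print(np.cov(x, y))
print(np.cov(x2, y2))

# ...but an intervention does: after do(Y := 2), the mean of X is
# 0 in Model A (X does not listen to Y) and b * 2 in Model B.
```

Both models produce the same joint distribution over (X, Y), so only an intervention such as stimulation can tell them apart.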
09.02.2026 11:33

No
08.02.2026 11:13

Why would you say that? E.g. if data is less than full rank, your predictions can never be correct. But I think I am saying a bit more: in high-d spaces a heavy-tailed or 1/f singular-value spectrum makes ML easy and CI hard...
07.02.2026 18:41

Great example!
07.02.2026 18:40

Well-predicting machine learning in no way means that you can understand how the world works.
open.substack.com/pub/kording/...
Why don't neural networks learn all at once, but instead progress from simple to complex solutions? And what does "simple" even mean across different neural network architectures?
Sharing our new paper @iclr_conf led by Yedi Zhang with Peter Latham
arxiv.org/abs/2512.20607
Reproducibility and open science are great, but they don't necessarily equate to rigor. You can share a study perfectly and still draw weak conclusions. True rigor lives in the questions we ask, the designs we choose, and the inferences we make.
02.02.2026 17:15

Without looking it up, what kind of data could realistically support a statement like "This reveals that the human brain may be simpler than previously thought, with potential implications for human cognition and disease."
02.02.2026 01:37

sign me up for it
30.01.2026 04:44

I cowrote a paper with someone from the other side about exactly this: journals.humankinetics.com/view/journal...
29.01.2026 19:11

Basically this whole discussion has nothing to do with the logic that drives the "internal model" field. It's a misunderstanding of what they mean. To them it's just that the nervous system has (explicitly, or rather implicitly) knowledge of the dynamics of the world. Full stop.
29.01.2026 18:14

Look, this is the misunderstanding between fields. When those guys talk about internal models they don't understand the actual field. "Internal models require homuncular interpreters, creating infinite regress problems" -- this is the opposite of what the key figures in the field believe!
29.01.2026 18:13

Ok. The word "internal" to you implies that it is in internal coordinates, and you disagree with that part? The "internal models" field does not actually mean that...
29.01.2026 16:27

Also, for history: there is a completely fruitless misunderstanding that plagued the field of motor control for 2 decades and produced a pointless "internal model" vs "equilibrium control" contrast. The brain does both. Not mutually exclusive.
29.01.2026 16:10

Ok. We must differ in our definition of internal model. My definition is "the ability to simulate the world". Uses are prediction, predictive control, planning, and practice in a simulated world. There is no implication that it needs to be explicit. What do you mean by internal model?
29.01.2026 16:07

Ok, can you imagine throwing a basketball at a hoop? If you can, then internal models exist, no?
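One way to read the basketball thought experiment as an internal model in the sense defined above: a forward model you can run offline, then plan over. A toy sketch of my own; the physics, the distances, the release speed, and the grid search are illustrative assumptions, not anything from the thread:

```python
import math

def simulate_throw(speed, angle_deg, release_h=2.0, hoop_dist=4.6, g=9.81):
    """Forward (internal) model: ball height when it reaches the hoop's distance."""
    a = math.radians(angle_deg)
    vx, vy = speed * math.cos(a), speed * math.sin(a)
    t = hoop_dist / vx                     # time to cover the horizontal distance
    return release_h + vy * t - 0.5 * g * t**2

# Planning = running the internal model over candidate actions, no real throw needed
hoop_h = 3.05
best_angle = min(range(20, 80),
                 key=lambda ang: abs(simulate_throw(8.0, ang) - hoop_h))
print(best_angle, simulate_throw(8.0, best_angle))
```

The model itself need not be explicit anywhere in the brain; the claim is only that the system can generate the consequences of actions it has not yet taken.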
29.01.2026 15:19

So the AI in my app will make a v0 of outlines. It will make a v0 of the draft. It will never make ideas. And it will tell you if there is something problematic in your ideas.
29.01.2026 14:21