
Konrad Kording

@kordinglab.bsky.social

@Penn Prof, deep learning, brains, #causality, rigor, http://neuromatch.io, Transdisciplinary optimist, Dad, Loves outdoors, πŸ¦– , c4r.io

14,333 Followers  |  557 Following  |  1,152 Posts  |  Joined: 01.05.2023

Latest posts by kordinglab.bsky.social on Bluesky

I feel that too!

10.02.2026 14:12 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Yeah. Neuroskyence is not having a moment

10.02.2026 13:45 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Left-wing political activism is doing fine on bsky

10.02.2026 13:44 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I am of course happy to be proven wrong, but I find the framing of this preprint a bit frustrating. We gave similar feedback before, yet the manuscript doesn't seem to engage with the counter-evidence. I would appreciate clarification on the results discrepancy -- please feel free to update the PR!

10.02.2026 11:17 β€” πŸ‘ 9    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

In this paper. But neuroAI compares to text with LLMs, vision with convnets, speech with RNNs, etc.

09.02.2026 22:57 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Planyourscience.com now applauds the good changes you make - and pushes back on the less good ones.

09.02.2026 20:43 β€” πŸ‘ 10    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

neuroAI comparisons of ANNs to brains do have a range of problems. Even more than I had realized. And I was worried before: www.biorxiv.org/content/10.1...

09.02.2026 14:13 β€” πŸ‘ 77    πŸ” 23    πŸ’¬ 6    πŸ“Œ 4

I agree that something is missing on twitter.

09.02.2026 14:07 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I would be curious about the experience of others on substack.

09.02.2026 14:00 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In my observation it's weirdly less the "will click on a post" rate than the "will follow up and actually read a paper" level.

09.02.2026 13:39 β€” πŸ‘ 8    πŸ” 0    πŸ’¬ 2    πŸ“Œ 0

Is Bluesky dying? For me it used to drive about as much engagement for math/science content as twitter. Now twitter is up by a factor of five.

09.02.2026 13:24 β€” πŸ‘ 13    πŸ” 0    πŸ’¬ 4    πŸ“Œ 0

To be clear, neuroAI has a lot of problems we did not discuss. E.g., it's typically opaque whether authors make a mechanistic or normative argument.

09.02.2026 11:35 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I guess. But the key is that without changing the world (e.g. by stimulating) you can never get there. And even if you can, the problem may still be impossible (there are sets of causal systems that produce exactly the same conditionals).

09.02.2026 11:33 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
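To make that last point concrete, here is a minimal numpy sketch (the setup, the coefficient b, and all numbers are mine, purely illustrative): two linear Gaussian systems with opposite causal arrows produce exactly the same joint distribution, hence the same conditionals, so only an intervention can tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# System A: X causes Y.
b = 0.8
x_a = rng.normal(0.0, 1.0, n)
y_a = b * x_a + rng.normal(0.0, np.sqrt(1 - b**2), n)   # Var(Y) = 1

# System B: Y causes X, coefficients chosen to match A's second moments.
y_b = rng.normal(0.0, 1.0, n)
x_b = b * y_b + rng.normal(0.0, np.sqrt(1 - b**2), n)

# Identical covariance => identical Gaussian joint => identical conditionals.
print(np.cov(x_a, y_a))   # ~[[1, 0.8], [0.8, 1]]
print(np.cov(x_b, y_b))   # ~[[1, 0.8], [0.8, 1]]

# Only changing the world separates them: do(X = 2) shifts Y's mean to
# b * 2 = 1.6 in system A but leaves Y's distribution untouched in system B.
```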

No

08.02.2026 11:13 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Why would you say that? E.g., if data is less than full rank, your predictions can never be correct.

08.02.2026 02:47 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
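A minimal sketch of the rank point (my toy construction, not from the thread): when the design matrix is rank-deficient, many weight vectors fit the observed data identically, so predictions for inputs off the spanned subspace are underdetermined.

```python
import numpy as np

rng = np.random.default_rng(1)

# Design matrix with 3 features but rank 2: feature 2 = feature 0 + feature 1.
X = rng.normal(size=(100, 2))
X = np.column_stack([X, X[:, 0] + X[:, 1]])
print(np.linalg.matrix_rank(X))            # 2

w_true = np.array([1.0, 2.0, 0.0])
y = X @ w_true

# A different weight vector fits the training data exactly...
w_alt = np.array([0.0, 1.0, 1.0])          # works because col2 = col0 + col1
print(np.allclose(X @ w_alt, y))           # True

# ...but disagrees on any input that breaks the collinearity.
x_new = np.array([1.0, 0.0, 0.0])
print(x_new @ w_true, x_new @ w_alt)       # 1.0 vs 0.0
```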

But I think I am saying a bit more. In high-d spaces a heavy-tailed or 1/f singular value spectrum makes ML easy and causal inference hard...

07.02.2026 18:41 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
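A hedged sketch of what that could look like (the spectrum, dimensions, and noise level are my choices): with a 1/f singular value spectrum, a few top modes suffice for forward prediction (the ML direction), while inverting the map divides the noise by the tiny trailing singular values (the causal/inverse direction).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 500

# Forward operator with a 1/f (power-law) singular value spectrum.
s = 1.0 / np.arange(1, d + 1)
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
V, _ = np.linalg.qr(rng.normal(size=(d, d)))
A = U @ (s[:, None] * V.T)                  # A = U diag(s) V^T

x = rng.normal(size=d)
y = A @ x + 1e-2 * rng.normal(size=d)

# ML / forward direction: 25 of 500 modes already predict A @ x well,
# because the spectrum concentrates the signal in the top components.
A25 = U[:, :25] @ (s[:25, None] * V[:, :25].T)
print(np.linalg.norm(A25 @ x - A @ x) / np.linalg.norm(A @ x))  # ~0.15

# CI / inverse direction: recovering x amplifies noise by 1/s_i,
# so the small trailing singular values blow the error up.
x_hat = np.linalg.solve(A, y)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))            # ~3, mostly noise
```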

great example!

07.02.2026 18:40 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Forward vs Inverse problems: why high performance machine learning usually means little about how the world works
Understanding causality from machine learning is unfortunately usually impossible; life sciences take note

Well-predicting machine learning in no way means that you can understand how the world works.
open.substack.com/pub/kording/...

07.02.2026 18:31 β€” πŸ‘ 51    πŸ” 8    πŸ’¬ 3    πŸ“Œ 2

Why don’t neural networks learn all at once, but instead progress from simple to complex solutions? And what does β€œsimple” even mean across different neural network architectures?

Sharing our new paper @iclr_conf led by Yedi Zhang with Peter Latham

arxiv.org/abs/2512.20607

03.02.2026 16:19 β€” πŸ‘ 151    πŸ” 41    πŸ’¬ 7    πŸ“Œ 3
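The paper's own analysis is in the arXiv link; as a loose illustration of the simple-before-complex phenomenon, here is the classic deep-linear-network toy (in the spirit of Saxe et al., not the paper's setup): modes with larger singular values are learned before weaker ones.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-layer linear net y = W2 @ W1 @ x trained on a target map whose
# two singular values differ, here 3 and 0.5.
d = 2
W_target = np.diag([3.0, 0.5])

W1 = 1e-3 * rng.normal(size=(d, d))   # small init: all modes start near zero
W2 = 1e-3 * rng.normal(size=(d, d))
lr = 0.05

for step in range(400):
    E = W2 @ W1 - W_target            # error in the end-to-end map
    g1 = W2.T @ E                     # dL/dW1 for L = 0.5 * ||W2 W1 - W||^2
    g2 = E @ W1.T
    W1 -= lr * g1
    W2 -= lr * g2
    if step % 100 == 0:
        sv = np.linalg.svd(W2 @ W1, compute_uv=False)
        print(step, sv)               # the strong mode (3) is learned first;
                                      # the weak mode (0.5) only much later
```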

Reproducibility and open science are great, but they don't necessarily equate to rigor. You can perfectly share a study and still draw weak conclusions. True rigor lives in the questions we ask, the designs we choose, and the inferences we make.

02.02.2026 17:15 β€” πŸ‘ 21    πŸ” 11    πŸ’¬ 1    πŸ“Œ 1

Without looking it up, what kind of data could realistically support a statement like "This reveals that the human brain may be simpler than previously thought, with potential implications for human cognition and disease"?

02.02.2026 01:37 β€” πŸ‘ 15    πŸ” 1    πŸ’¬ 11    πŸ“Œ 1

sign me up for it

30.01.2026 04:44 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I cowrote a paper with someone from the other side about exactly this: journals.humankinetics.com/view/journal...

29.01.2026 19:11 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Basically this whole discussion has nothing to do with the logic that drives the "internal model" field. It's a misunderstanding of what they mean. To them it's just that the nervous system has (explicitly, or rather implicitly) knowledge of the dynamics of the world. Full stop.

29.01.2026 18:14 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Look, this is the misunderstanding between fields. When those guys talk about internal models they don't understand the actual field. "Internal models require homuncular interpreters, creating infinite regress problems" -- this is the opposite of what the key figures in the field believe!

29.01.2026 18:13 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Ok. The word "internal" to you implies that it is in internal coordinates and you disagree with that part? The "internal models" field does not actually mean that...

29.01.2026 16:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Also for history, there is a completely fruitless misunderstanding that plagued the field of motor control for 2 decades and produced a pointless "internal model" vs "equilibrium control" contrast. The brain does both. Not mutually exclusive.

29.01.2026 16:10 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Ok. We must differ in our definition of internal model. My definition is "the ability to simulate the world". Uses are prediction, predictive control, planning, practice in a simulated world. There is no implication that it needs to be explicit. What do you mean by internal model?

29.01.2026 16:07 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 3    πŸ“Œ 0
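Under that definition, an internal model can be as plain as a learned forward map that you roll out; here is a minimal sketch (the dynamics and the `world` / `imagine` helpers are hypothetical toys of mine, not anyone's theory of the brain).

```python
import numpy as np

rng = np.random.default_rng(4)

def world(s, a):
    """The real dynamics, which the agent never sees directly."""
    return 0.9 * s + 0.5 * a

# Fit the internal model from experience: least squares on (s, a) -> s'.
S = rng.normal(size=(200, 2))                  # columns: state, action
s_next = np.array([world(s, a) for s, a in S])
w, _, _, _ = np.linalg.lstsq(S, s_next, rcond=None)

def imagine(s, actions):
    """Prediction / planning: roll out the model without touching the world."""
    for a in actions:
        s = w[0] * s + w[1] * a
    return s

# Planning: pick the action sequence whose imagined outcome lands nearest a goal.
goal = 1.0
candidates = [rng.uniform(-1, 1, size=3) for _ in range(50)]
best = min(candidates, key=lambda acts: abs(imagine(0.0, acts) - goal))
print(w, imagine(0.0, best))   # w ~ [0.9, 0.5]; imagined end state near the goal
```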

Ok. Can you imagine throwing a basketball at a hoop? If you can, then internal models exist, no?

29.01.2026 15:19 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

So the AI in my app will make a v0 of outlines. It will make a v0 of the draft. It will never make ideas. And it will tell you if there is something problematic in your ideas.

29.01.2026 14:21 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
