Oops I did the thing 😬🤦‍♂️🤦‍♂️🤦‍♂️
en.wikipedia.org/wiki/Brand_b...
@dlevenstein.bsky.social
Neuroscientist, in theory. Studying sleep and navigation in 🧠s and 💻s. Wu Tsai Investigator, Assistant Professor of Neuroscience at Yale. An emergent property of a few billion neurons, their interactions with each other and the world over ~1 century.
This seems to fly in the face of "thou shalt not assume causation from correlation" and "thou shalt not assume function from form"
From what I can tell, the argument is that an adaptive system (evo/bio/neuro) will learn to use any knob available, so if we see a knob we should assume it's used?
But the good news is that I finally finished the final revisions on the last hangover paper from my PhD and it only took 4 years so I guess you can say Today Was The Day I Finally Became Doctor?
07.12.2025 03:37

Some really good tips here. Wish I had learned number 3 earlier, or ever for that matter 🥲
www.reddit.com/r/GradSchool...
Really enjoyed this conversation! Makes me think we should find a way to finish our GAC paper.... @repromancer.bsky.social @dlbarack.bsky.social ;)
05.12.2025 15:55

In which @kordinglab.bsky.social argues LLMs are more like an electric motor than a drill, and starts to build a drill for scientific research.
open.substack.com/pub/kording/...
ML papers need to come with a pronunciation guide… for reproducibility.
04.12.2025 12:54

Yes very cool but how do we pronounce it?
"Muppy" or "Moopy"?
Sad to miss it. Pls report back the good stuff
02.12.2025 19:27

0/10 Thanks for the interest in our preprint. Some takes say it negates or fully supports the "manifold hypothesis"; neither is quite right. Our results show that if you only focus on the manifold capturing most of task-related variance, you could miss important dynamics that actually drive behavior.
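The point in the post above can be illustrated with a toy sketch (entirely synthetic data, not the preprint's analysis): ranking neural dimensions by variance explained, as PCA does, can bury the low-variance dimension that actually tracks behavior.

```python
# Toy example: a low-variance neural dimension can carry the behaviorally
# relevant signal, so a "manifold" chosen by variance-explained misses it.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500

behavior = rng.standard_normal(n_trials)       # latent behavioral variable
loud = 5.0 * rng.standard_normal(n_trials)     # high-variance mode, unrelated to behavior
quiet = 0.5 * behavior                         # low-variance mode that encodes behavior

# Two "neural dimensions": the first dominates total variance.
X = np.column_stack([loud, quiet])
X = X - X.mean(axis=0)

# PCA via SVD: PC1 is the loud, behavior-independent direction.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

pc_scores = X @ Vt.T
corr_pc1 = np.corrcoef(pc_scores[:, 0], behavior)[0, 1]
corr_pc2 = np.corrcoef(pc_scores[:, 1], behavior)[0, 1]

print("variance explained:", var_explained)     # PC1 captures ~99% of variance
print("|corr(PC1, behavior)|:", abs(corr_pc1))  # near 0
print("|corr(PC2, behavior)|:", abs(corr_pc2))  # near 1
```

Keeping only the top-variance component here would discard essentially all of the behavior-related signal, which is the failure mode the post warns about.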
02.12.2025 07:48

I'm with you - the ability to train neural models (with interpretable mapping to cells, circuits, and regions) to perform complex tasks in rich environments is new* and exciting and we're just scratching the surface.
*someone can always point out 1-2 pioneers before their time
For a while it felt like bsky was recapturing the prof involvement of science twitter but not the students (the former PhDs/postdocs of science twitter are the new-profs of bsky).
Lately it feels like a lot more students getting on the bus 🧪🦋
#neuroskyence
TFW there's a whole new podcast about Fela Kuti for the ride home 🤩
open.spotify.com/episode/203S...
Sometimes I think the best we can hope for is productively wrong
29.11.2025 23:43

Where do you think this is *not* the case? (i.e. what parts of theoretical neuro today are truly new?)
Where do you think this will not be the case in 20 years? (i.e. if research progresses in a direction you think it should, what will be the new stuff we're not talking about today?)
I for one really enjoyed your recent and groundbreaking paper on Cross-Task Sharing of Mid-Level Features Predicts Perceptual Learning Transfer and would love to do a PhD with you…
28.11.2025 21:17

The basal forebrain plays the cortex like a piano.
28.11.2025 17:51

Around 12 years ago I had the good fortune to meet Massimo Scanziani and told him about my ongoing postdoc project, which I was preparing a manuscript on. His first question was "what is the central message of your paper?"
That one prompt changed the way I write papers forever.
Oh man. Science Neural Circuits would be my new favorite journal.
26.11.2025 20:02

There's an interesting parallel here to neural network interpretability…
Understanding the recipe is not the same as knowing how the cake tastes at inference.
I am really proud that eLife have published this paper. It is a very nice paper, but you need to also read the reviews to understand why! 1/n
25.11.2025 20:34

Y'all are reading this paper in the wrong way.
We love to trash dominant hypotheses, but we need to look for evidence against the manifold hypothesis elsewhere:
This elegant work doesn't show neural dynamics are high D, nor that we should stop using PCA
It's quite the opposite!
(thread)
congrats Ann! you're killing it :D
24.11.2025 19:29

Ahhh this is great. I remember @repromancer.bsky.social talking about this when he was working on his EG paper: www.biorxiv.org/content/10.1...
Did not realize it came from Amari
Ahh good to know re: reproducibility.
Jascha actually came and presented the delay learning stuff here a few weeks ago, super cool! Made me think it's time to get into training some spiking networks
There are a small number of papers that I still think about regularly 10 years after reading them and this is one of them.
24.11.2025 13:54

I've also always wondered how all these D's relate - coding, communication, fractal, movement-related 😵‍💫
An interesting perspective on this from @lukesjulson.bsky.social and @eliezyer.bsky.social:
www.biorxiv.org/content/10.1...
1) This is a great idea.
2) I would be happy to mentor a project like this. If you want to do a project eg at the intersection of Philosophy of Science and NeuroAI, please feel free to reach out to discuss, or just put my name as a suggested mentor!
Could you recommend an "Amari to the rescue" paper to start with? Have been meaning to dig into his work.
23.11.2025 17:11

During my PhD, I remember someone saying something to the effect of "really easy to wake an animal up with neural stimulation, really hard to put them to sleep".
That was until Yang Dan's lab took up the challenge.