Scientists are always doing what's interesting to them, but I think that's the wrong approach. We should be going after all the stuff that bores us. Because it's actually not boring at all once you get into it, and it's precisely the things we think are going to be boring that open our minds.
I tracked every keyword in 22 years of Cosyne abstracts to map how computational neuroscience evolved — from Bayesian brains to neural manifolds to LLMs — and where it's heading next.
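A minimal sketch of this kind of keyword trend analysis, assuming abstracts are available as (year, text) pairs; the corpus and keyword list here are hypothetical stand-ins, not the actual Cosyne data:

```python
from collections import Counter, defaultdict

# Hypothetical corpus: (year, abstract text) pairs stand in for the
# real Cosyne abstract archive, which is not included here.
abstracts = [
    (2004, "A Bayesian model of cue combination in cortex"),
    (2015, "Neural manifolds underlying motor cortex population dynamics"),
    (2024, "Do LLMs and brains share representational geometry?"),
]

keywords = ["bayesian", "manifolds", "llms"]

# Count keyword mentions per year to trace how topics rise and fall.
counts = defaultdict(Counter)
for year, text in abstracts:
    tokens = [tok.strip("?.,") for tok in text.lower().split()]
    for kw in keywords:
        counts[year][kw] += tokens.count(kw)

for year in sorted(counts):
    print(year, dict(counts[year]))
```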
The field of #neuromorphics is lacking *accessible*, *intuitive*, and *practical* introductions. Ramashish Gaurav, Petruț Antoniu Bogdan, and I are setting out to fix this with a book on Practical Spiking Neural Networks! ✅
Any and all contributions are welcome! 💕
Early access at: snnbook.net
One of the underrated papers this year:
"Small Batch Size Training for Language Models:
When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful" (arxiv.org/abs/2507.07101)
(I can confirm this holds for RLVR, too! I have some experiments to share soon.)
Very neat approach to mapping biological circuits to neuromorphic HW!
Had the pleasure of providing some feedback to Suraj Honnuraiah on an earlier draft of the paper, great to see it finally in print! (I actually missed that it already came out in Oct)
Link to paper:
www.pnas.org/doi/10.1073/...
How does the structure of a neural circuit shape its function?
@neuralreckoning.bsky.social & I explore this in our new preprint:
doi.org/10.1101/2025...
🤖🧠🧪
🧵1/9
With Masashi launching his new lab, we’ll be recruiting a new postdoc in the Oldenburg Lab.
Work: high-precision multiphoton holography, neural coding, motor cortex circuits, all-optical physiology.
If you’re interested, just reach out.
1/6 New preprint 🚀 How does the cortex learn to represent things and how they move without reconstructing sensory stimuli? We developed a circuit-centric recurrent predictive learning (RPL) model based on JEPAs.
🔗 doi.org/10.1101/2025...
Led by @atenagm.bsky.social @mshalvagal.bsky.social
When you join a lab to do some voltage imaging
I am still able to log in fine; I can try submitting an error/bug report on your behalf
Maybe it's 'just' a question of figuring out how few neurons you can nudge (and which ones) to get max desynch. Anyway, it definitely sounds like a cool modelling problem! 😅
Interesting, how would one actually do this? (either the isochron or the Lyapunov) E.g. optogenetics seems more likely to induce synchrony
whereas more global LFP (~0.1-0.5mV amplitude) would have a much smaller direct effect (but maybe still meaningful). Much harder to demonstrate such causal effects in mammals, since manipulating single neurons generally doesn't affect behavior like it does in drosophila
Yes, thanks for sharing @stevenflorek.bsky.social! This is a pretty convincing example of ephaptic coupling having a causal role (drosophila ppl always have the coolest results!). My guess is that the key variable here is distance, e.g. many neurons might have such an effect on their immediate neighbors
E.g. you could argue that hippocampal theta-sweeps would be completely messed up if you jitter the spikes just a little, but the underlying place field would be essentially the same
Yes, it's always a question of temporal resolution. Saying "spike timing matters" implies that jittering spikes by ~1-2 ms would meaningfully change the computation. "Rate code" implies that the computation would only be meaningfully affected by shifting spike times by a much larger amount (e.g. >10 ms)
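A toy sketch of this distinction (all numbers illustrative, not from any dataset): jitter a spike train by ~1 ms vs ~15 ms, then compare a rate readout with a millisecond-precision coincidence readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative spike train: 40 spikes over 1 s (times in ms).
spikes = np.sort(rng.uniform(0, 1000, size=40))

def jitter(spike_times, sigma_ms, rng):
    """Add Gaussian jitter (std sigma_ms) to each spike time."""
    return np.sort(spike_times + rng.normal(0, sigma_ms, size=spike_times.size))

def rate(spike_times, duration_ms=1000):
    """Mean firing rate in Hz."""
    return 1000 * spike_times.size / duration_ms

def coincidences(a, b, window_ms=2.0):
    """Fraction of spikes in `a` with a partner in `b` within +/- window_ms."""
    return np.mean([np.any(np.abs(b - t) <= window_ms) for t in a])

small = jitter(spikes, 1.0, rng)   # ~1 ms jitter: fine timing mostly preserved
large = jitter(spikes, 15.0, rng)  # >10 ms jitter: fine timing destroyed

# The rate is unchanged by jitter, so a rate readout is blind to either case...
print(rate(spikes), rate(small), rate(large))
# ...but a 2 ms coincidence readout degrades sharply under the larger jitter.
print(coincidences(spikes, small), coincidences(spikes, large))
```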
Since this is clearly one of your favorite topics, would you be able to point to 1-2 papers / key results that you find to be the most compelling piece(s) of evidence?
Lots of weak, insignificant effects can be measured; showing that something matters causally is extremely difficult. The claim that spike timing matters broadly is only moderately controversial. The claim that ephaptic coupling is causal in some larger circuit computation is much more controversial
💯, I'm confused about what people are actually trying to claim here. Oscillations are important in the sense that *spike timing* matters. I think there is a good amount of data backing that up (e.g. HPC theta). This has nothing to do with any direct effect of the (extremely weak) LFP electric field
supportive mentor I could have wished for, and who gave me the time and space to learn and grow, and the intellectual freedom to explore my ideas (and the occasional rabbit hole). Will miss all the wonderful colleagues I am leaving behind. Now on to new adventures, see you all back in Europe!
Feel incredibly fortunate to have worked alongside @neurosutras.bsky.social over the last few years and proud of all that we achieved. I'm very grateful to all the people who supported my many fellowship applications over the years.
Especially grateful to Aaron, who has been the most
Last week was my last at Rutgers University. After nearly 5 years in the US, I am moving to the Netherlands to join Innatera, a neuromorphic computing startup pushing the boundaries of what we can do with ultra-efficient hardware running SNNs for edge computing.
There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.
References should only be used for short-listed candidates for important positions/awards, and ideally, be done via a call to get the most honest opinion possible.
Very cool! Maybe it's just my bad intuition, but I find it surprising that weights can tolerate more extreme quantisation than delays
Psst - neuromorphic folks. Did you know that you can solve the SHD dataset with 90% accuracy using only 22 kb of parameter memory by quantising weights and delays? Check out our preprint with @pengfei-sun.bsky.social and @danakarca.bsky.social, or read the TLDR below. 👇🤖🧠🧪 arxiv.org/abs/2510.27434
With my great advisors and colleagues, @achterbrain.bsky.social @zhe @danakarca.bsky.social @neural-reckoning.org, we show that if heterogeneous axonal delays (even imprecise ones) can capture the essential temporal structure of a task, spiking networks do not need precise synaptic weights to perform well.
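A minimal sketch of what low-bit quantisation of weights and delays looks like; the layer sizes, bit widths, and ranges below are hypothetical illustrations, not the paper's actual setup or code:

```python
import numpy as np

def quantise(x, n_bits, lo, hi):
    """Uniformly quantise values in [lo, hi] to 2**n_bits levels."""
    levels = 2 ** n_bits - 1
    x = np.clip(x, lo, hi)
    q = np.round((x - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

rng = np.random.default_rng(1)
w = rng.normal(0, 0.5, size=(64, 64))   # synaptic weights (hypothetical layer)
d = rng.uniform(0, 20, size=(64, 64))   # axonal delays in ms

w_q = quantise(w, 2, -1.0, 1.0)   # 2-bit weights: only 4 distinct values
d_q = quantise(d, 5, 0.0, 20.0)   # 5-bit delays: 32 distinct values

# Parameter memory for this toy layer: bits per parameter, summed.
memory_kb = (w.size * 2 + d.size * 5) / 8 / 1024
print(len(np.unique(w_q)), len(np.unique(d_q)), memory_kb)
```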
It's been a pleasure and a privilege, going to really miss working with you and with the rest of the lab!
Really enjoyed reading this short opinion piece by Tim O'Leary. I think it echoes the classic Feynman quote "what I cannot create I do not understand". I think engineering approaches such as neuromorphic computing will prove fundamental to scientific understanding of how biological brains work
Reminder this is happening this Wed/Thu. Free spiking neural network conference - registration required (see below).
My co-authors have yet to move to Bluesky, so I'm pleased to announce our latest work has just been published in @nature.com Neuroscience. Amazing work led by Junheng Li, revealing that falling asleep follows a predictable bifurcation pattern #neuroskyence #sleep
www.nature.com/articles/s41...