Dear universities,
I am begging you to stop requiring letters of recommendation for master's programmes. You and I both know you don't read them, so stop asking for them.
Instead, have applicants list a name and get in touch if it's a borderline case.
Signed,
Everyone.
28.07.2025 18:28 · 533 likes · 85 reposts · 16 replies · 24 quotes
Also, even if the theory is wrong/overly simplistic, putting it in terms of concrete physiological variables could at least still be useful for coming up with ideas on which non-obvious experiments to do next. Might find evidence for *one* role of cell type X, even if not the only role
28.07.2025 15:51 · 3 likes · 0 reposts · 1 reply · 0 quotes
Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks
We investigate the extent to which Spiking Neural Networks (SNNs) trained with Surrogate Gradient Descent (Surrogate GD), with and without delay learning, can learn from precise spike timing beyond fi...
New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with @pengfei-sun.bsky.social).
arxiv.org/abs/2507.16043
Surrogate gradients are popular for training SNNs, but some worry whether they really learn complex temporal spike codes. TLDR: we tested this, and yes they can! 🧵👇
🤖🧠🧪
24.07.2025 17:03 · 40 likes · 16 reposts · 1 reply · 1 quote
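For readers outside the SNN world, a minimal sketch of the surrogate-gradient trick the preprint builds on (my own illustration in PyTorch, not the paper's code): the forward pass keeps the hard spike threshold, and the backward pass swaps in a smooth "fast sigmoid" derivative so gradients can flow through spike generation.

```python
# Minimal sketch (assumes PyTorch; not the paper's implementation).
# Forward: Heaviside step (emit a spike when the membrane variable crosses 0).
# Backward: replace the step's zero/undefined derivative with a smooth surrogate.
import torch

class SurrogateSpike(torch.autograd.Function):
    beta = 10.0  # surrogate sharpness (hypothetical value)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # binary spikes

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.beta * v.abs() + 1.0) ** 2  # fast-sigmoid derivative
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply  # drop-in for a hard threshold inside an SNN layer
```
The question the preprint asks is whether networks trained this way exploit precise spike times rather than just firing rates; the sketch above only shows the training mechanism, not that analysis.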
Exciting new preprint from the lab: βAdopting a human developmental visual diet yields robust, shape-based AI visionβ. A most wonderful case where brain inspiration massively improved AI solutions.
Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy
arxiv.org/abs/2507.03168
08.07.2025 13:03 · 124 likes · 55 reposts · 3 replies · 10 quotes
Definitely, with these short message formats words often come across as harsher/more direct than we intend. The tone of voice sounds friendly in our heads, but doesn't always read that way
11.07.2025 21:58 · 2 likes · 0 reposts · 1 reply · 0 quotes
(these bsky/xitter debates seem to devolve into semantics suspiciously often :P)
11.07.2025 16:03 · 2 likes · 0 reposts · 1 reply · 0 quotes
Aside from the needless aggression, I've found reading this conversation quite enlightening :) Didn't realize there was such ambiguity around the term "mean-field". Personally I associate it more with the single-neuron averaging (~=ANN), but I think the population averaging view is equally valid
11.07.2025 16:03 · 3 likes · 0 reposts · 1 reply · 0 quotes
I was thinking something even more basic, like the work that has been done on how individual receptive fields change when animals are reared in strange environments
11.07.2025 13:44 · 2 likes · 0 reposts · 1 reply · 0 quotes
The closest I can think of is something like the efficient coding hypothesis, where we can make a theory about representation that would be optimal for encoding sensory info in a given environment, and predict how this should change when sensory statistics are altered
11.07.2025 13:26 · 1 like · 0 reposts · 1 reply · 0 quotes
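To make that concrete with a toy example (my own sketch, not from the thread): in a simple linear-Gaussian efficient-coding setup, the optimal filters follow the stimulus covariance, so altering the sensory statistics an animal is reared in changes the receptive fields the theory predicts.

```python
# Toy linear efficient-coding sketch (illustrative assumption: the optimal linear
# filters are the leading principal components of the stimulus ensemble).
import numpy as np

rng = np.random.default_rng(0)

def optimal_filters(stimuli, n_units=2):
    """Top principal directions of a stimulus ensemble."""
    cov = np.cov(stimuli, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_units]
    return eigvecs[:, order]

# "Normal" environment: variance concentrated along the first stimulus axis.
normal_env = rng.normal(size=(5000, 3)) * np.array([3.0, 1.0, 0.3])
# "Altered rearing" environment: a different axis dominates.
altered_env = rng.normal(size=(5000, 3)) * np.array([0.3, 1.0, 3.0])

print(optimal_filters(normal_env))   # filters load on the first axis
print(optimal_filters(altered_env))  # filters reorganize toward the third axis
```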
I'm reminded of one of my favorite quotes from Eve Marder: "The process of turning a word model into a formal mathematical model invariably forces the experimentalist to confront his or her hidden assumptions."
The point isn't to capture all the richness, it's to put your money where your mouth is.
07.07.2025 11:35 · 66 likes · 15 reposts · 3 replies · 1 quote
(1/7) New preprint from Rajan lab! 🧠🤖
@ryanpaulbadman1.bsky.social & Riley Simmons-Edler show, through cog sci, neuro & ethology, how an AI agent with fewer "neurons" than an insect can forage, find safety & dodge predators in a virtual world. Here's what we built
Preprint: arxiv.org/pdf/2506.06981
02.07.2025 18:33 · 88 likes · 31 reposts · 3 replies · 2 quotes
But in this example, wouldn't you say that this neuron doesn't actually represent either fish or faces, but rather it represents some feature that is present in both? So it does have a representation, just not one that can be simplistically described by a single word
05.06.2025 19:52 · 2 likes · 0 reposts · 1 reply · 0 quotes
In proportion to the number of people that make use of the dataset. If it drives lots of follow-on science, then we should continue investing to collect such datasets. If we hear crickets, then maybe wait before collecting the next connectome.
04.06.2025 23:06 · 3 likes · 0 reposts · 0 replies · 0 quotes
But I generally agree that a blind copy/paste approach won't get us very far. I still think it's a useful dataset to have, and gathering more connectome data is a good thing. Copying every detail is probably a bad strategy, but copying *some* might prove useful; we shouldn't dismiss it prematurely.
04.06.2025 17:25 · 2 likes · 0 reposts · 1 reply · 0 quotes
Useful for understanding the brain, even if not useful for building better LLMs. Might be useful for "AI" approaches that deviate from the current mainstream (e.g. neuromorphic computing); too early to say. Strong claims that something will never work tend not to age very well.
04.06.2025 17:10 · 1 like · 0 reposts · 1 reply · 0 quotes
Obviously we can't just plug-and-play and expect to outperform an LLM. But this take seems overly dismissive of a promising approach that is only just starting to be explored. Bio architectures may yet prove useful for select applications (e.g. where we have specific hardware constraints)
04.06.2025 14:34 · 5 likes · 0 reposts · 1 reply · 0 quotes
I find the arguments somewhat unconvincing. To start, advancing AI is not the only goal. Second, just because there is degeneracy and we don't know everything about every molecular interaction doesn't mean that knowing the general architecture is not a useful and important first step.
04.06.2025 14:34 · 2 likes · 0 reposts · 1 reply · 0 quotes
And yet most humanoid robots are pretty bad at basic movements. I suspect that when it comes to interacting with the physical world, the hardware/energy/latency constraints are so different from what DL/GPUs are optimized for that bio-architectures and neuromorphic hardware might prove more useful
04.06.2025 14:14 · 3 likes · 0 reposts · 0 replies · 0 quotes
I'm curious if you think the same applies to robotics, where DL progress has been more limited. E.g. I found recent work from @neurograce.bsky.social's lab quite inspiring and seems like a good counter-example where the connectome alone gives a useful architecture for learning relevant movements
04.06.2025 14:00 · 1 like · 0 reposts · 1 reply · 0 quotes
The recent work I presented at Cosyne this year is now out on bioRxiv! Combining bio-plausible credit assignment using (apical) dendrites, realistic E/I cell types, and BTSP-based synaptic plasticity. One small step closer to understanding learning in the brain! Check out the summary here 👇
29.05.2025 00:59 · 9 likes · 0 reposts · 0 replies · 0 quotes
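For context on the ingredients named above, here is a toy sketch of the BTSP idea (an illustration only, not the preprint's model): presynaptic activity leaves a slowly decaying eligibility trace, and a rare dendritic plateau/instructive signal converts whatever trace is present into a weight change.

```python
# Toy BTSP-flavoured update (illustrative; parameter values are made up).
import numpy as np

def btsp_update(weights, pre_spikes, plateau, tau_elig=2.0, lr=0.1, dt=0.1):
    """One simulated trial. pre_spikes: (timesteps, n_syn) binary; plateau: (timesteps,)."""
    elig = np.zeros_like(weights)
    for t in range(pre_spikes.shape[0]):
        elig += dt * (-elig / tau_elig) + pre_spikes[t]  # decaying eligibility trace
        if plateau[t]:                                   # dendritic plateau (instructive signal)
            weights = weights + lr * elig                # potentiate recently active inputs
    return weights

rng = np.random.default_rng(1)
w = np.zeros(5)
pre = (rng.random((100, 5)) < 0.05).astype(float)  # sparse presynaptic spiking
plat = np.zeros(100)
plat[60] = 1.0                                     # a single plateau event mid-trial
print(btsp_update(w, pre, plat))                   # inputs active shortly before t=60 change most
```
Real BTSP acts over a much longer (seconds-scale and partly acausal) window; the sketch only conveys the trace-plus-instructive-signal structure.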
Motor learning refines thalamic influence on motor cortex - Nature
Imaging and optogenetics in mice provide insight into the interplay between the primary motor cortex and the motor thalamus during learning, showing that thalamic inputs have a key role in the executi...
Our paper is out in Nature.
By examining various inputs to the motor cortex during learning, we found that thalamic inputs learn to activate the cortical neurons encoding the movement being learned.
Tour de force by Assaf in collab with Felix and Marcus. Congrats!
www.nature.com/articles/s41...
08.05.2025 00:12 · 161 likes · 43 reposts · 3 replies · 0 quotes
It can be a bit intimidating the first time you go (especially if you don't know many people in the field), but during the conference itself I also met loads of people who were really interested in how cellular properties can affect systems-level function.
06.05.2025 20:08 · 3 likes · 0 reposts · 1 reply · 0 quotes
I highly recommend giving it a shot. In 2018 I had a poster accepted that was heavily cellular (dendrites, single-cell biophysical modelling, slice patch-clamp experiments). In my application I just emphasised the systems/computational relevance. If you DM me I'm happy to share my old submission
06.05.2025 19:54 · 3 likes · 0 reposts · 1 reply · 0 quotes
This is an amazing set of experiments, we're clearly living in the golden age of connectomics and structure/function mapping. Congrats to the whole team!
01.05.2025 15:50 · 1 like · 0 reposts · 0 replies · 0 quotes
βMy job is to carve off a sliver of the ineffable, and to eff it.β
www.experimental-history.com/p/28-slightl...
30.04.2025 12:07 · 9 likes · 6 reposts · 1 reply · 0 quotes
Not too long ago, we were asked when we're going to replace Wikipedia's human-curated knowledge with AI.
The answer? We're not.
The community of volunteers behind Wikipedia is the most important and unique element of Wikipedia's success. For nearly 25 years, Wikipedia editors have researched, deliberated, discussed, built consensus, and collaboratively written the largest encyclopedia humankind has ever seen. Their care and commitment to reliable encyclopedic knowledge is something AI cannot replace.
That is why our new AI strategy doubles down on the volunteers behind Wikipedia.
We will use AI to build features that remove technical barriers to allow the humans at the core of Wikipedia to spend their valuable time on what they want to accomplish, and not on how to technically achieve it. Our investments will be focused on specific areas where generative AI excels, all in the service of creating unique opportunities that will boost Wikipedia's volunteers:
Wikimedia has a new AI strategy!
A colleague and I spent months working on it. I am so happy that it is out. wikimediafoundation.org/news/2025/04...
30.04.2025 13:57 · 201 likes · 58 reposts · 5 replies · 8 quotes
Neuromorphic Questionnaire
This form collects valuable information from the Neuromorphic Community as part of a project led by Matteo Saponati, Laura Kriener, Sebastian Billaudelle, Filippo Moro, and Melika Payvand. The goal is...
Take our short 5-min anonymous survey on the Neuromorphic field's current state & future:
tinyurl.com/3jkszrnr
Open until May 12, 2025
Results will be shared openly and submitted for publication. Your input will help us understand how interdisciplinary trends are shaping the field.
16.04.2025 10:27 · 8 likes · 10 reposts · 1 reply · 0 quotes
And just today the EIC of Nature Neuroscience told me in public that they are a for-profit company and if they had to pay reviewers they would "need" to raise prices.
24.04.2025 05:58 · 22 likes · 4 reposts · 2 replies · 1 quote
I am teaching my PhD writing workshop course this quarter. Question: are there any words/phrases said to you by an advisor/mentor that stuck with you, were memorable, or were particularly helpful? If so, please reply below!
23.04.2025 19:45 · 658 likes · 149 reposts · 347 replies · 65 quotes
Computational Neuroscience PhD Student
I'm a postdoc @Imperial working with @danakarca.bsky.social and @neuralreckoning.bsky.social. I'm passionate about brain-inspired neural networks, focusing on delay learning in RNNs (spiking/rate) and lightweight attention mechanisms.
Biomedical AI PhD at the University of Edinburgh, working on #NeuroAI & #ML4Health. https://bryanli.io.
Scientist, Professor
Passionate advocate for responsible and humane research involving animals.
#ThankAMonkey #EndSufferingThruScience
Views are my own.
A career network featuring science jobs in academia and industry.
Visit our platform at www.science.hr
Solve climate change or die trying.
Full videos on YouTube 👇
https://linktr.ee/climatetown
Spinal cord and motor control scientist; PI at NINDS
NeuroAI Scholar @ CSHL
https://darsnack.github.io
Previously maintaining FluxML to procrastinate
Previously EE PhD at UW-Madison, comp. eng. / math at Rose-Hulman
PostDoc at the Kavli Institute for Systems Neuroscience at NTNU Trondheim. Whitman Scientist at the Marine Biological Laboratory (MBL) in Woods Hole, MA. I am studying sleep in octopuses and cuttlefish. 🐙
predoc @NERF (KU Leuven, VIB, imec)
comp neuro and machine learning
goncalveslab.sites.vib.be/en
Computational Cognitive Neuroscientist at CiNet & Osaka University. Category learning to concepts & everything between (semantic/episodic memory). Cognitive aging/damage in models & brains. To understand the brain & AI.
Postdoc in Computational Neuroscience, PhD in Robotics from the University of Washington. Musician/composer. I love anything brains and music theory. He/him. https://jisacks.github.io/
Asst. Prof. of Neuroscience at Rutgers University-Newark.
husband, father, neuroscientist, senior group leader at Janelia @hhmijanelia.bsky.social, beard enthusiast, unrepentant dilettante, he|him
www.dudmanlab.org
https://orcid.org/0000-0002-4436-1057
Associate Professor of Physics, research in computational neuroscience, brain-inspired AI, and physics. Enjoy gazing at the night sky and playing music. #neuroscience #physics #astrophotography #music
PhD student in Taiwan. He/him.
Neuroscientist / Federal Center of Neurosurgery
https://scholar.google.com/citations?hl=en&user=FHrf6KAAAAAJ&view_op=list_works&sortby=pubdate
Theor/Comp. Neuroscientist (postdoc)
Prev: @TU Munich
Stochastic&nonlinear dynamics @TU Berlin & @MPIDS
Learning dynamics, plasticity, and geometry of representations
https://dimitra-maoutsa.github.io
https://dimitra-maoutsa.github.io/M-Dims-Blog
Comp neuro @ Champalimaud