NEW in the Deeper Learning blog: @annhuang42.bsky.social & @kanakarajanphd.bsky.social break down their recent work examining how #RNNs solve the same task in different ways, and why that matters. Joint work with @satpreetsingh.bsky.social & @flavioh.bsky.social bit.ly/4kj4fVd #NeuroAI
[bonus] Here's a function that two neurons in a channel can implement
More interesting details can be found in the paper: arxiv.org/abs/2506.14951
Or come by our poster if you're at NeurIPS (Session 3, poster #4200)
Wonderful team with Alex Van Meegen @avm.bsky.social, Berfin Simsek, Wulfram Gerstner @gerstnerlab.bsky.social and Johanni Brea
But what happens with standard gradient descent?
Channels to infinity get sharper as O(γ²); this is a clear example of the edge-of-stability phenomenon:
gradient descent does not converge to a minimum (at infinity) but gets stuck where the sharpness of the channel is 2/η (η: learning rate)
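A minimal sketch of that 2/η threshold on a toy quadratic (my own illustration, not the paper's setup): gradient descent on f(x) = ½·s·x² converges exactly when the sharpness s is below 2/η.

```python
import numpy as np

# Toy edge-of-stability illustration (not the paper's setup):
# GD on f(x) = 0.5 * sharpness * x**2 has update x <- (1 - eta*sharpness) * x,
# so it converges iff sharpness < 2/eta.
def gd_converges(sharpness, eta, x0=1.0, steps=200):
    x = x0
    for _ in range(steps):
        x = x - eta * sharpness * x  # gradient step on the quadratic
    return abs(x) < abs(x0)

eta = 0.1                       # learning rate -> stability threshold 2/eta = 20
print(gd_converges(19.0, eta))  # just below the threshold: True (converges)
print(gd_converges(21.0, eta))  # just above the threshold: False (diverges)
```

In a channel, the sharpness grows along the trajectory until it hits this 2/η ceiling, which is where the iterates stall.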
These channels are surprisingly common in MLPs: we find they account for a significant proportion of all minima reached in our training runs
But they can only be spotted by training for a long time, following the gradient flow with ODE solvers
But what do these pairs of neurons compute?
In the limit γ→∞ and ε→0 (where ε is the distance between the two neurons' input weights), they compute a directional derivative!
The MLP is learning to implement a Gated Linear Unit, with a non-linearity that is the derivative of the original activation
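Here's a quick numeric sketch of that limit (my own illustration, not code from the paper): a neuron pair with input weights w and w + εu and output weights ±γ, with γ = 1/ε, is a finite difference that approaches σ'(w·x)·(u·x) as ε→0.

```python
import numpy as np

# Two neurons with nearly identical input weights (w and w + eps*u) and output
# weights +gamma and -gamma, gamma = 1/eps, implement a finite difference:
#   gamma * (sigma((w + eps*u).x) - sigma(w.x))  ->  sigma'(w.x) * (u.x)
# as eps -> 0, i.e. a directional derivative gating the linear term u.x.
rng = np.random.default_rng(0)
w, u, x = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
sigma = np.tanh

def pair_output(eps):
    gamma = 1.0 / eps  # output weights diverge as the input weights merge
    return gamma * (sigma((w + eps * u) @ x) - sigma(w @ x))

target = (1 - np.tanh(w @ x) ** 2) * (u @ x)  # sigma'(w.x) * (u.x)
print(abs(pair_output(1e-5) - target))        # ~0: the pair matches the derivative
```

The σ'(w·x) factor acts as the gate on the linear term u·x, which is the GLU-like structure mentioned above.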
Here are some more pictures from different angles
When perturbing networks from their saddle points, gradient trajectories get stuck in nearby channels that run parallel to the saddle line
The gradient dynamics are simple: after a first phase of alignment, trajectories are straight and γ→∞
These channels are parallel to lines of saddle points arising from permutation symmetries, as described by Fukumizu & Amari in 2000
Saddles can be formed by taking a network at a local minimum and splitting a neuron's contribution into two, with splitting factor γ
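A minimal sketch of that split (my own illustration of the Fukumizu–Amari construction): duplicate a hidden neuron's input weights and share its output weight a as γ·a and (1−γ)·a — the network function is unchanged for every γ, giving a whole line of critical points.

```python
import numpy as np

# Neuron splitting: duplicate hidden neuron 0, keep its input weights identical,
# and split its output weight a[0] into gamma*a[0] and (1-gamma)*a[0].
# The network computes the same function for any gamma.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))  # hidden-layer input weights (4 neurons, 3 inputs)
a = rng.normal(size=4)       # output weights
x = rng.normal(size=3)

def mlp(W, a, x):
    return a @ np.tanh(W @ x)

gamma = 0.3
W_split = np.vstack([W, W[0:1]])  # duplicated copy of neuron 0
a_split = np.concatenate([[gamma * a[0]], a[1:], [(1 - gamma) * a[0]]])
print(np.isclose(mlp(W, a, x), mlp(W_split, a_split, x)))  # True for any gamma
```

Since the loss is constant along γ, the split network sits on a line of critical points, and these are the saddle lines the channels run parallel to.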
🧵Excited to present our latest work at #Neurips25! Together with @avm.bsky.social, we discover 𝐜𝐡𝐚𝐧𝐧𝐞𝐥𝐬 𝐭𝐨 𝐢𝐧𝐟𝐢𝐧𝐢𝐭𝐲: regions in neural network loss landscapes where parameters diverge to infinity (in regression settings!)
We find that MLPs in these channels can take derivatives and compute GLUs 🤯
🎉Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!
arxiv.org/pdf/2410.03972
It started from a question I kept running into:
When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️
Exciting news for #drosophila #connectomics and #neuroscience enthusiasts: the Drosophila male central nervous system connectome is now live for exploration. Find out more at the landing page hosted by our Janelia FlyEM collaborators www.janelia.org/project-team....
Lab members are at the Bernstein conference @bernsteinneuro.bsky.social with 9 posters! Here's the list:
TUESDAY 16:30 – 18:00
P1 62 “Measuring and controlling solution degeneracy across task-trained recurrent neural networks” by @flavioh.bsky.social
To our fellow researchers at Harvard and elsewhere. 🧪🧠
I have funds for visiting PhD students or postdocs at TU Wien in Vienna. For a short stay or a full PhD, email me.
For professors: check, for instance, this tenure-track opening, or ask me in private about options
informatics.tuwien.ac.at/news/2909
Isn't NeuroAI a modern rebranding of computational neuroscience?
My take is that NeuroAI just sounds a little broader as a term, incorporating cognition and behaviour into the picture (which were not so accurately modelled before ANNs).
To me the goals of compneuro and NeuroAI are fully overlapping.