We're hiring 5 T/TT faculty in Neuroscience, including Computational Neuroscience, at U Notre Dame
We'll start reviewing applications very soon, so if you're thinking about applying, please apply now/soon!
apply.interfolio.com/173031
I'm posting this again for anyone who might have missed it last time:
Notre Dame is hiring 5 tenure or tenure-track professors in Neuroscience, including Computational Neuroscience, across 4 departments.
Feel free to reach out with any questions.
And please share!
apply.interfolio.com/173031
The University of Notre Dame is hiring 5 tenure or tenure-track professors in Neuroscience, including Computational Neuroscience, across 4 departments.
Come join me at ND! Feel free to reach out with any questions.
And please share!
apply.interfolio.com/173031
Fixed! (I think) Try again and let me know if you still have trouble. You might need to refresh the page.
03.06.2025 12:39

Thank you for the feedback, I'll work on both of those!
03.06.2025 12:24

Thanks for the suggestion, that makes sense. I am just trying to figure out the best implementation. It's difficult (for me) to combine email verification and profile creation on the same page. Maybe a link to a screenshot of an example profile on the registration page?
02.06.2025 17:25

Most universities have generous "Conflict of Commitment" policies that allow faculty to devote a portion of their time to consulting work, but these policies are under-utilized.
Consulting work can provide valuable industry experience, and also extra cash.
Couldn't the same argument be made for conference presentations (which 90% of the time only describe published work)?
20.05.2025 19:05

When _you_ publish a new paper, lots of people notice, lots of people read it. No explainer thread needed. Deservedly so, because you have a reputation for writing great papers.
When Dr. Average Scientist publishes a paper, nobody notices, and nobody reads it without some legwork to get it out there.
Thanks! Let us know if you have comments or questions
19.05.2025 16:13

In other words:
Plasticity rules like Oja's let us go beyond studying how synaptic plasticity in the brain can _match_ the performance of backprop.
Now, we can study how synaptic plasticity can _beat_ backprop in challenging, but realistic learning scenarios.
Finally, we meta-learned pure plasticity rules with no weight transport, extending our previous work. When Oja's rule was included, the meta-learned rule _outperformed_ pure backprop.
19.05.2025 15:33

We find that Oja's rule works, in part, by preserving information about inputs in hidden layers. This is related to its known properties in forming orthogonal representations. Check the paper for more details.
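A minimal sketch of the orthogonality point (my own toy illustration of the general principle, not the paper's analysis): compose several linear layers and compare the singular values of the full input-to-hidden map. Generic random layers distort inputs badly, while orthogonal layers, like those Oja's rule tends to produce, act as an isometry and preserve input information.

```python
import numpy as np

# Toy illustration: why orthogonal weights preserve input information.
# Compose 10 linear layers and inspect the singular values of the full map.
rng = np.random.default_rng(0)
n, depth = 100, 10

M_rand = np.eye(n)   # product of generic Gaussian layers
M_orth = np.eye(n)   # product of orthogonal layers
for _ in range(depth):
    G = rng.normal(size=(n, n))
    M_rand = (G / np.sqrt(n)) @ M_rand   # standard 1/sqrt(n) Gaussian init
    Q, _ = np.linalg.qr(G)               # orthogonal layer
    M_orth = Q @ M_orth

for name, M in [("random", M_rand), ("orthogonal", M_orth)]:
    s = np.linalg.svd(M, compute_uv=False)
    # Orthogonal: all singular values equal 1 (an isometry).
    # Random: singular values spread over orders of magnitude, so some input
    # directions are amplified and others are effectively erased.
    print(f"{name:>10}: max/min singular value = {s[0]:.2e} / {s[-1]:.2e}")
```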
19.05.2025 15:33

Vanilla RNNs trained with pure BPTT fail on simple memory tasks. Adding Oja's rule to BPTT drastically improves performance.
19.05.2025 15:33

We often forget how important careful weight initialization is for training neural nets because our software initializes them for us. Adding Oja's rule to backprop also eliminates the need for careful weight initialization.
19.05.2025 15:33

We propose that plasticity rules like Oja's rule might be part of the answer. Adding Oja's rule to backprop improves learning in deep networks in an online setting (batch size 1).
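Schematically, the kind of hybrid update this involves looks like the sketch below (my simplified illustration only; see the preprint for the exact formulation and hyperparameters):

```python
import numpy as np

# Hedged sketch of a backprop + Oja hybrid update for one layer of a deep
# net trained online (batch size 1). Illustrative only; the preprint's
# actual formulation may differ.
def hybrid_update(W, x, grad_W, lr_bp=1e-2, lr_oja=1e-3):
    """W: (n_out, n_in) weights; x: (n_in,) layer input for this sample;
    grad_W: backprop gradient of the loss w.r.t. W."""
    y = W @ x                                    # layer output for this sample
    oja = np.outer(y, x) - np.outer(y, y) @ W    # Oja term: Hebbian + decay
    return W - lr_bp * grad_W + lr_oja * oja     # gradient step + Oja step
```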
19.05.2025 15:33

For example, a 10-layer feedforward network trained on MNIST using online learning (batch size 1) performs poorly when trained with pure backprop. How does the brain learn effectively without all of these engineering hacks?
19.05.2025 15:33

In our new preprint, we dug deeper into this observation. Our motivation is that modern machine learning depends on lots of engineering hacks beyond pure backprop: gradients averaged over batches, batchnorm, momentum, etc. These hacks don't have clear, direct biological analogues.
19.05.2025 15:33

In previous work on this question, we meta-learned linear combos of plasticity rules. In doing so, we noticed something interesting:
One plasticity rule improved learning, but its weight updates weren't aligned with backprop's. It was doing something different. That rule is Oja's plasticity rule.
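For reference, Oja's rule in its subspace form is simple enough to demo in a few lines (a minimal sketch assuming white inputs, not the meta-learning setup from the paper). The Hebbian term grows weights along input directions, and the decay term drives the rows of W toward an orthonormal basis:

```python
import numpy as np

# Minimal demo of Oja's rule (subspace form) for a linear layer y = W x:
#     dW = lr * (y x^T - y y^T W)
rng = np.random.default_rng(0)
n_in, n_out, lr = 50, 10, 1e-3
W = 0.1 * rng.normal(size=(n_out, n_in))

for _ in range(20000):
    x = rng.normal(size=n_in)   # white input
    y = W @ x
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)

# Rows of W converge to an orthonormal set: W @ W.T approaches the identity.
print(np.round(W @ W.T, 2))
```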
A lot of work in "NeuroAI," including our own, seeks to understand how synaptic plasticity rules can match the performance of backprop in training neural nets.
19.05.2025 15:33

New preprint with my postdoc, Navid Shervani-Tabar, and former postdoc, Marzieh Alireza Mirhoseini.
Oja's plasticity rule overcomes challenges of training neural networks under biological constraints.
arxiv.org/abs/2408.08408
A scientific figure blueprint guide!
If this seems empty it's because I don't plan to use it anytime soon!
I made this figure panel size guide to avoid thinking about dimensions every time. Apparently this post is going to be a 🧵! So feel free to bookmark it and save yourself some time.
16.05.2025 09:50

Interesting comment, but you need to define what you mean by "neuroanatomy." Does such a thing actually exist? As a thing in itself or as a phenomenon? What would Kant have to say? ;)
15.05.2025 14:44

Sorry, I didn't mean to phrase that antagonistically.
I just think that unless we're talking strictly about anatomy and restricting to a direct synaptic pathway (which maybe you are), it's difficult to make this type of question precise without concluding that everything can query everything.
Unless we're talking about a direct synapse, I don't know how we can expect to answer this question meaningfully when a neuromuscular junction in my pinky toe can "read out" and "query" photoreceptors in my retina.
13.05.2025 14:27

Thanks. Yeah, I think this example helps clarify 2 points:
1) large negative eigenvalues are not necessary for LRS, and
2) high-dim input and stable dynamics are not sufficient for high-dim responses.
Motivated by this conversation, I added eigenvalues to the plot and edited the text a bit, thx!
Well deserved. Congratulations, Adrienne!
01.05.2025 01:56

^ I feel like this is a problem you'd be good at tackling
26.04.2025 13:51

One thing I tried to work out, but couldn't: We assumed a discrete number of large singular values of W, but what if there is a continuous but slow decay (e.g., a power law)?
How would one derive the decay rate of the variance-explained values in terms of the singular value decay rate and the overlap matrix?
Maybe it's possible to write this condition on the singular values of P in terms of the eigenspectrum of W in a simple way, but I don't know how.
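One way to at least probe this numerically (a sketch under an assumed setup: a stable linear rate network with steady-state response r = (I - W)^{-1} x driven by white high-dimensional input; it makes the question concrete but doesn't derive the rate analytically):

```python
import numpy as np

# Numerical probe: give W power-law singular values s_k ~ k^(-alpha), keep
# the spectral norm below 1 for stability, and see how the variance-explained
# values of the steady-state responses r = (I - W)^{-1} x decay. Singular
# vectors are drawn at random, so their overlap matrix is generic.
rng = np.random.default_rng(0)
n = 500
for alpha in [0.5, 1.0, 2.0]:
    s = 0.9 * np.arange(1, n + 1) ** (-alpha)       # power-law singular values
    U, _ = np.linalg.qr(rng.normal(size=(n, n)))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))
    W = U @ np.diag(s) @ V.T
    A = np.linalg.inv(np.eye(n) - W)                # input -> response map
    var = np.linalg.svd(A, compute_uv=False) ** 2   # eigenvalues of Cov(r)
    var /= var.sum()
    print(f"alpha={alpha}: top 5 variance-explained values = {np.round(var[:5], 3)}")
```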
26.04.2025 12:38