Applications for 2026 entry to the Gatsby Bridging Programme (a 7-week maths summer school) open on 19 Jan and close on 16 Feb. It is designed for students who wish to pursue a postgraduate research degree in theoretical neuroscience or foundational machine learning but whose degree programme lacks a strong maths focus. Applications from students in groups underrepresented in STEM are strongly encouraged. A small number of bursaries is available.
Register for the information webinar on 23 Jan.
Applications open on 19 Jan for the 7-week #Mathematics #SummerSchool in London. You will develop the maths skills and intuition needed to enter the #TheoreticalNeuroscience / #MachineLearning fields.
Find out more & register for the information webinar: www.ucl.ac.uk/life-science...
15.01.2026 14:37
Scalable models for high-dimensional neural data
A COSYNE 2014 Workshop
Back in 2014, I co-organized a #COSYNE workshop on scalable modeling. scalablemodels.wordpress.com #timeflies
13.01.2026 13:51
Very excited about this work showing that people with no hand function following a spinal cord injury can control the activity of motor units in the affected muscles to perform 1D, 2D and 3D tasks, play video games, or navigate a virtual wheelchair.
By a wonderful team co-mentored with Dario Farina.
07.01.2026 22:36
Department Chair, Duke Neurobiology | Duke University School of Medicine, Durham
The Duke University School of Medicine (SOM) seeks a distinguished neuroscientist to serve as the next Chair of the Department of Neurobiology.
My department, Duke Neurobiology, is searching for a new chair. Ad below. Come work with me, @jmgrohneuro.bsky.social @ennatsew.bsky.social @jorggrandl.bsky.social @jnklab.bsky.social @sbilbo.bsky.social @neurocircuits.bsky.social and many other amazing folks! @dukemedschool.bsky.social
13.01.2026 01:21
We have reached a situation where (1) the time/resources spent by people applying for grant X often outweigh (2) the time/resources awarded.
For these grants, society loses net time/resources.
www.nature.com/articles/d41...
13.01.2026 09:44
How can I accelerate the breakdown of caffeine in my body? I will need to increase CYP1A2 (P450) activity (without smoking). Vigorous exercise over 30 days was shown to increase it by up to 70%? pubmed.ncbi.nlm.nih....
05.01.2026 15:08
Learning a lot while preparing for a lecture on RNNs for neuroscience.
02.01.2026 17:34
According to TripIt, I traveled 240 Mm (yes, that's megameters) across 10 countries in 2025. Oh my. I'm definitely going to travel much, much less this year.
02.01.2026 17:27
According to last.fm, my favourite artist of 2025 was #LauraThorn: scrobbled 493/5728 times (just one song, La poupée monte le son), 0.01% of fans worldwide for the song, out of 2672 unique tracks I listened to. Also #1 on Beatrice Rana's Goldberg Variations album.
02.01.2026 15:16
A major personal goal for 2025 was extensive networking. I met so many interesting people around the world, which helped enable these meetings and future collaborations.
30.12.2025 17:12
Not everything worked out. I submitted six major grant applications in 2025; five were rejected, despite substantial time and resources invested (still waiting to hear back on the last one). All 3 of our NeurIPS submissions were rejected.
30.12.2025 17:12
Neurocybernetics at Scale 2025
In Oct, I co-organized Neurocybernetics at Scale, a three-day conference with ~300 participants, aimed at rethinking how neuroscience can scale in the modern era and how we might better integrate across levels, methods, and communities:
neurocybernetics.cc/neurocyberne...
30.12.2025 17:12
Beyond Clarity
In April, I co-organized Beyond Clarity (with Sool Park), a small, closed interdisciplinary meeting on how the combinatorial yet discrete limits of language create gaps in meaning across fields, and how to overcome them:
beyond-clarity.github.io
30.12.2025 17:12
Highlights of 2025
Ayesha Vermani defended her PhD thesis this year. She helped jump-start a new direction in integrative neuroscience:
Vermani et al. (2025), Meta-dynamical state space models for integrative neural data analysis. ICLR
openreview.net/forum?id=SRp...
youtu.be/SiXxPmkpYF8
30.12.2025 17:12
I admire all who have donated and will donate to OpenReview. Thank you.
23.12.2025 11:15
Today, the NeurIPS Foundation is proud to announce a $500,000 donation to OpenReview, supporting the infrastructure that makes modern ML research possible.
blog.neurips.cc/2025/12/15/s...
15.12.2025 13:00
About CR | Champalimaud Foundation
English is the working language.
Curious about our culture, values, and scientific environment?
Learn more: www.fchampalimaud.org/about-cr
16.12.2025 19:20
The INDP includes an initial year of advanced coursework plus three lab rotations, followed by PhD research. We welcome talented, motivated applicants from neuroscience, as well as physics, mathematics, statistics, computer science, electrical/biomedical engineering, and related quantitative fields.
16.12.2025 19:20
Fully funded International Neuroscience Doctoral Programme (INDP), Champalimaud Foundation, Lisbon, Portugal
Deadline: Jan 31, 2026
fchampalimaud.org/champalimaud...
Research program spans systems/computational/theoretical/clinical/sensory/motor neuroscience, neuroethology, intelligence, and more!!
16.12.2025 19:20
You can have labelled lines and copies of microcircuits, too. But I'm just acknowledging some evolutionary pressure to use neuron-centric codes. (In fact, I'm fully a mixed-selectivity kinda neuroscientist.)
12.12.2025 15:27
One advantage of a monosemantic, sharply tuned, grandmother-cell, axis-aligned, neuron-centric representation, as opposed to a polysemantic, mixed-selective, oblique population code, is that it can benefit from evolution. Genes are good at operating at the cell level. #neuroscience
12.12.2025 13:56
Theoretical Insights on Training Instability in Deep Learning TUTORIAL
uuujf.github.io/inst...
The gradient-flow-like regime is slow and can overfit, while a large (but not too large) step size can transiently go far, converge faster, and find better solutions. #optimization #NeurIPS2025
07.12.2025 00:02
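As a toy illustration of the step-size claim (my own sketch, not material from the tutorial; the function steps_to_converge and all constants are made up for this example), gradient descent on the 1-D quadratic f(x) = a*x^2/2 converges in far fewer iterations with a large-but-stable step size (below 2/a) than in the tiny, gradient-flow-like regime, and diverges beyond 2/a. A quadratic can only show the speed effect, not the transient excursions or the better solutions, which need non-quadratic losses.

```python
# Toy comparison (assumed setup, not from the tutorial): gradient descent on
# f(x) = 0.5 * a * x**2 with different step sizes. Steps below 2/a converge;
# beyond 2/a the iterates diverge.
def steps_to_converge(lr, a=1.0, x0=10.0, tol=1e-6, max_iter=100_000):
    x = x0
    for t in range(max_iter):
        if abs(x) < tol:
            return t                 # converged
        if abs(x) > 1e12:
            return max_iter          # diverged (step size too large)
        x -= lr * a * x              # gradient step: f'(x) = a * x
    return max_iter                  # too slow to converge within the budget

for lr in (0.001, 0.1, 1.0, 1.9, 2.1):   # 2.1 exceeds the 2/a = 2.0 stability limit
    print(f"lr={lr:>5}: {steps_to_converge(lr)} steps")
```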
Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training | OpenReview
Diffusion models have achieved remarkable success across a wide range of generative tasks. A key challenge is understanding the mechanisms that prevent their memorization of training data and allow generalization. In this work, we investigate the role of the training dynamics in the transition from generalization to memorization. Through extensive experiments and theoretical analysis, we identify two distinct timescales: an early time $\tau_\mathrm{gen}$ at which models begin to generate high-quality samples, and a later time $\tau_\mathrm{mem}$ beyond which memorization emerges. Crucially, we find that $\tau_\mathrm{mem}$ increases linearly with the training set size $n$, while $\tau_\mathrm{gen}$ remains constant. This creates a growing window of training times with $n$ where models generalize effectively, despite showing strong memorization if training continues beyond it. It is only when $n$ becomes larger than a model-dependent threshold that overfitting disappears at infinite training times.
These findings reveal a form of implicit dynamical regularization in the training dynamics, which allows the model to avoid memorization even in highly overparameterized settings. Our results are supported by numerical experiments with standard U-Net architectures on realistic and synthetic datasets, and by a theoretical analysis using a tractable random features model studied in the high-dimensional limit.
Score/flow-matching diffusion models only start memorizing when trained for long enough.
Bonnaire, T., Urfin, R., Biroli, G., & Mézard, M. (2025). Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training.
07.12.2025 00:02
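To make the scaling concrete, here is a tiny back-of-the-envelope sketch. Only the qualitative law comes from the abstract (tau_gen roughly constant, tau_mem growing linearly with n); the constants tau_gen = 5,000 and slope = 2.0 are invented for illustration.

```python
# Assumed constants, for illustration only; the abstract's claim is that
# tau_gen stays constant while tau_mem grows linearly with training-set size n,
# so the window of "generalize without memorizing" widens with n.
tau_gen = 5_000          # steps until samples look good (assumed)
slope = 2.0              # assumed slope of tau_mem(n) = slope * n

for n in (10_000, 100_000, 1_000_000):
    tau_mem = slope * n
    print(f"n = {n:>9,}: safe training window [{tau_gen:,}, {int(tau_mem):,}) steps, "
          f"width {int(tau_mem - tau_gen):,}")
```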
Parallelizing MCMC Across the Sequence Length | OpenReview
Markov chain Monte Carlo (MCMC) methods are foundational algorithms for Bayesian inference and probabilistic modeling. However, most MCMC algorithms are inherently sequential and their time complexity scales linearly with the sequence length. Previous work on adapting MCMC to modern hardware has therefore focused on running many independent chains in parallel. Here, we take an alternative approach: we propose algorithms to evaluate MCMC samplers in parallel across the chain length. To do this, we build on recent methods for parallel evaluation of nonlinear recursions that formulate the state sequence as a solution to a fixed-point problem and solve for the fixed-point using a parallel form of Newton's method. We show how this approach can be used to parallelize Gibbs, Metropolis-adjusted Langevin, and Hamiltonian Monte Carlo sampling across the sequence length. In several examples, we demonstrate the simulation of up to hundreds of thousands of MCMC samples with only tens of parallel Newton iterations. Additionally, we develop two new parallel quasi-Newton methods to evaluate nonlinear recursions with lower memory costs and reduced runtime. We find that the proposed parallel algorithms accelerate MCMC sampling across multiple examples, in some cases by more than an order of magnitude compared to sequential evaluation.
related:
Tricks to make it even faster.
Zoltowski, D. M., Wu, S., Gonzalez, X., Kozachkov, L., & Linderman, S. (2025). Parallelizing MCMC Across the Sequence Length. The Thirty-Ninth Annual Conference on Neural Information Processing Systems.
07.12.2025 00:02
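Here is a minimal sketch of the core idea under my own simplifications: it uses plain Jacobi fixed-point sweeps on an unadjusted Langevin chain, not the paper's Newton/quasi-Newton solvers or its Gibbs/MALA/HMC constructions, and the function names and constants are mine. The point it illustrates: once the noise is drawn up front, the sampler is a deterministic nonlinear recursion, so the whole trajectory is the fixed point of a map over sequences that can be refined in parallel.

```python
import numpy as np

def grad_U(x):                        # target: standard normal, U(x) = x**2 / 2
    return x

def langevin_step(x_prev, xi, eps=0.5):   # one unadjusted Langevin step, frozen noise
    return x_prev - eps * grad_U(x_prev) + np.sqrt(2 * eps) * xi

T = 100_000
rng = np.random.default_rng(0)
xi = rng.standard_normal(T)           # pre-drawing the noise makes the chain deterministic

x_seq = np.zeros(T + 1)               # sequential reference, one step at a time
for t in range(T):
    x_seq[t + 1] = langevin_step(x_seq[t], xi[t])

x_par = np.zeros(T + 1)               # parallel Jacobi sweeps over the whole trajectory
for sweep in range(1, 1001):
    x_new = np.concatenate(([0.0], langevin_step(x_par[:-1], xi)))  # all steps at once
    done = np.max(np.abs(x_new - x_par)) < 1e-10
    x_par = x_new
    if done:
        break

print(f"{sweep} sweeps; max deviation from sequential chain: "
      f"{np.max(np.abs(x_par - x_seq)):.2e}")
```

In this toy the Langevin drift is contractive, which is why roughly a hundred thousand samples should converge in a few dozen sweeps, in the same spirit as the paper's "tens of parallel Newton iterations".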
Predictability Enables Parallelization of Nonlinear State Space Models | OpenReview
The rise of parallel computing hardware has made it increasingly important to understand which nonlinear state space models can be efficiently parallelized. Recent advances have shown that evaluating a state space model can be recast as solving a parallelizable optimization problem, and sometimes this approach yields dramatic speed-ups in evaluation time. However, the factors that govern the difficulty of these optimization problems remain unclear, limiting the larger adoption of the technique. In this work, we establish a precise relationship between the dynamics of a nonlinear system and the conditioning of its corresponding optimization formulation. We show that the predictability of a system, defined as the degree to which small perturbations in state influence future behavior, directly governs the number of optimization steps required for evaluation. In predictable systems, the state trajectory can be computed in $\mathcal{O}((\log T)^2)$ time, where $T$ is the sequence length, a major improvement over the conventional sequential approach. In contrast, chaotic or unpredictable systems exhibit poor conditioning, with the consequence that parallel evaluation converges too slowly to be useful. Importantly, our theoretical analysis demonstrates that for predictable systems, the optimization problem is always well-conditioned, whereas for unpredictable systems, the conditioning degrades exponentially as a function of the sequence length. We validate our claims through extensive experiments, providing practical guidance on when nonlinear dynamical systems can be efficiently parallelized, and highlighting predictability as a key design principle for parallelizable models.
Some of my favorites from #NeurIPS2025
A more negative max Lyapunov exponent => faster parallelized RNN convergence.
Gonzalez, X., Kozachkov, L., Zoltowski, D. M., Clarkson, K. L., & Linderman, S. (2025). Predictability Enables Parallelization of Nonlinear State Space Models. The Thirty-Ninth Annual Conference on Neural Information Processing Systems.
07.12.2025 00:02
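A small companion toy (again my own construction, not the paper's experiments or its conditioning analysis): the same Jacobi-style sweeps converge in a handful of iterations for a contractive map, i.e. a negative maximum Lyapunov exponent, but need on the order of T sweeps for a chaotic map, where parallel evaluation buys nothing.

```python
import numpy as np

def sweeps_to_match(f, x0, T, tol=1e-10):
    """Count Jacobi sweeps until the parallel iterate matches the sequential trajectory."""
    x_ref = np.empty(T + 1)                     # sequential reference
    x_ref[0] = x0
    for t in range(T):
        x_ref[t + 1] = f(x_ref[t])

    x = np.full(T + 1, x0)                      # initial guess for the whole trajectory
    for k in range(1, T + 2):
        x = np.concatenate(([x0], f(x[:-1])))   # update every time step at once
        if np.max(np.abs(x - x_ref)) < tol:
            return k
    return T + 1

T = 2_000
contractive = lambda x: 0.5 * np.tanh(x) + 0.1   # |f'| <= 0.5: predictable dynamics
chaotic     = lambda x: 4.0 * x * (1.0 - x)      # logistic map at r = 4: chaotic

print("contractive:", sweeps_to_match(contractive, 0.3, T), "sweeps")
print("chaotic:    ", sweeps_to_match(chaotic, 0.3, T), "sweeps")
```

The contractive case should need only tens of sweeps regardless of T, while the chaotic one needs essentially all T, echoing the point that predictability, not sequence length, governs parallelizability.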
This was a fantastic poster presentation!
05.12.2025 18:56
Melanie Mitchell's keynote reminds us that it is not easy to evaluate intelligence (in AIs, babies, animals, etc.) and that benchmarks can be VERY misleading. #NeurIPS2025
04.12.2025 23:19
The opposite of double descent in deep neural networks, and it questions the optimization worldview. :P
03.12.2025 19:32
Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training | OpenReview
Bonnaire, T., Urfin, R., Biroli, G., & Mézard, M. (2025). Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training. The Thirty-Ninth Annual Conference on Neural Information Processing Systems.
03.12.2025 19:32
Scientist • Cognitive and Systems Neuroscience • Dynamic Minds and Brains • Postdoc at the University of Pittsburgh • Open Science • Solidarity • he/him
computational affective motivational (neuro)psychiatry. interested in serotonin, stress, & decision-making. K99 Fellow at Brown. https://debyeeneuro.com she/her.
Working towards the safe development of AI for the benefit of all at Université de Montréal, LawZero and Mila.
A.M. Turing Award Recipient and most-cited AI researcher.
https://lawzero.org/en
https://yoshuabengio.org/profile/
Neuro + AI Research Scientist at DeepMind; Affiliate Professor at Columbia Center for Theoretical Neuroscience.
Likes studying learning+memory, hippocampi, and other things brains have and do, too.
she/her.
Professor in Computational Neuroscience at Imperial College London
Staff Research Scientist at Google DeepMind. Artificial and biological brains.
Distributional information and syntactic structure in the brain | postdoc @ Université de Genève | MPI for Psycholinguistics, BCBL, Utrecht University | she/her
::language, cognitive science, neural dynamics::
Lise Meitner Group Leader, Max Planck Institute for Psycholinguistics |
Principal Investigator, Donders Centre for Cognitive Neuroimaging, Radboud University |
http://www.andreaemartin.com/
lacns.GitHub.io
Max Planck group leader at ESI Frankfurt | human cognition, fMRI, MEG, computation | sciences with the coolest (phd) students et al. | she/her
Neuro-AI Postdoc @ MPI Biological Cybernetics. Previously @Harvard, A*STAR & NUS.
Computational Neuroscience | PhD candidate at @cmc-lab.bsky.social | Fine Arts, Music
Scientist & skeptic. Dad. Book addict. Pathologically curious. Origins and Evolution of Complexity, Synthetic Transitions, Liquid Brains, and Earth Terraformation. ICREA + SFI professor. Author. Secular humanist.
Brain-body scientist. Microglia, neurodevelopment, and fetal programming of adult disease. Prof and Interim Chair of Neurobiology at Duke Univ.
Neuroscientist, retina nerd at Duke. Dept of Neurobiology & Duke Eye Center. Studying nervous system development, cell-cell recognition, and vision.
Precision Biophysics and Neuroscience
What is the brain?
Personal account, all views my own. Reposts meant as informative and not implying endorsement.
neuroscientist at Instituto de Neurociencias (https://in.umh-csic.es/en/grupos/neural-circuits-in-vision-for-action); co-owner at XAOΣ μbrewing co. (Chania)
The IN is the largest publicly funded centre in Spain dedicated to research on the brain in both normal and pathological conditions.
A Severo Ochoa Centre of Excellence since 2014.
https://in.umh-csic.es/es/