
Memming Park

@memming.bsky.social

Computational Neuroscientist & Neurotechnologist

558 Followers  |  299 Following  |  55 Posts  |  Joined: 30.10.2023

Latest posts by memming.bsky.social on Bluesky

Applications for 2026 entry to the Gatsby Bridging Programme (7-week maths summer school) will open on 19 Jan and close on 16 Feb. Designed for students who wish to pursue a postgrad research degree in theoretical neuroscience or foundational machine learning but whose degree programme lacks a strong maths focus. Applications from students in underrepresented groups in STEM strongly encouraged. A small number of bursaries available.
Register for the information webinar on 23 Jan.

πŸ“’ Applications open on 19 Jan for the 7-week #Mathematics #SummerSchool in London. You will develop the maths skills and intuition necessary to enter the #TheoreticalNeuroscience / #MachineLearning field.

Find out more & register for the information webinar πŸ‘‰ www.ucl.ac.uk/life-science...

15.01.2026 14:37 β€” πŸ‘ 23    πŸ” 25    πŸ’¬ 0    πŸ“Œ 1
Scalable models for high-dimensional neural data A COSYNE 2014 Workshop

Back in 2014, I co-organized a #COSYNE workshop on scalable modeling. scalablemodels.wordpress.com #timeflies

13.01.2026 13:51 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

πŸš¨πŸ“œ+🧡🚨 Very excited about this work showing that people with no hand function following a spinal cord injury can control the activity of motor units from those muscles to perform 1D, 2D and 3D tasks, play video games, or navigate a virtual wheelchair

By a wonderful team co-mentored w Dario Farina

07.01.2026 22:36 β€” πŸ‘ 72    πŸ” 17    πŸ’¬ 1    πŸ“Œ 2
Department Chair, Duke Neurobiology - Duke University, Durham | Duke University School of Medicine | 12852576
The Duke University School of Medicine (SOM) seeks a distinguished neuroscientist to serve as the next Chair of the Department of Neurobiology.

My department, Duke Neurobiology, is searching for a new chair. Ad below. Come work with me, @jmgrohneuro.bsky.social @ennatsew.bsky.social @jorggrandl.bsky.social @jnklab.bsky.social @sbilbo.bsky.social @neurocircuits.bsky.social and many other amazing folks! @dukemedschool.bsky.social

13.01.2026 01:21 β€” πŸ‘ 22    πŸ” 25    πŸ’¬ 0    πŸ“Œ 2

We have reached a situation where (1) the time/resources spent by people applying for grant X often outweigh (2) the time/resources awarded.

For these grants, society loses net time/resources.

www.nature.com/articles/d41...

13.01.2026 09:44 β€” πŸ‘ 53    πŸ” 10    πŸ’¬ 1    πŸ“Œ 2
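A back-of-the-envelope way to state that point, with hypothetical numbers purely for illustration: if N teams each spend cost c preparing a proposal and the scheme awards total value V, the community comes out behind whenever aggregate application cost exceeds the award.

```latex
% Illustrative accounting (not from the article): N = number of applications,
% c = average cost of preparing one application, V = total value awarded.
\[
  \text{net benefit} \;=\; V - N\,c
  \qquad\Longrightarrow\qquad
  \text{net benefit} < 0 \iff N\,c > V .
\]
% Hypothetical example: 200 applications at 3 person-weeks each is 600
% person-weeks of effort; if the awarded funding buys back fewer than
% 600 person-weeks of research time, society loses time overall.
```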

How can I accelerate the breakdown of caffeine in my body? I would need to increase CYP1A2 (P450) activity (without smoking). Vigorous exercise over 30 days was shown to increase it by up to 70%. pubmed.ncbi.nlm.nih....

05.01.2026 15:08 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
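For intuition on what a 70% bump in CYP1A2 activity would buy, here is the standard first-order elimination picture, an illustrative simplification that assumes caffeine clearance is dominated by CYP1A2 and that kinetics are single-compartment:

```latex
% First-order elimination (illustrative assumption):
%   C(t) = C_0 e^{-kt},  half-life  t_{1/2} = \ln 2 / k.
% If CYP1A2-mediated clearance scales k by ~1.7 (a 70% increase),
% the half-life shrinks by the same factor:
\[
  t_{1/2}' \;=\; \frac{\ln 2}{1.7\,k} \;=\; \frac{t_{1/2}}{1.7} \;\approx\; 0.59\, t_{1/2}
\]
% e.g. a nominal 5 h caffeine half-life would drop to roughly 3 h.
```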

Learning a lot while preparing for a lecture on RNNs for neuroscience.

02.01.2026 17:34 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

according to TripIt, I traveled 240 Mm (yes, that's megameters) across 10 countries in 2025. Oh my. I'm definitely going to travel much, much less this year.

02.01.2026 17:27 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

according to last.fm, my favourite artist of 2025 was #LauraThorn, scrobbled 493 times out of my 5,728 total (just one song: La poupée monte le son), 0.01% of fans worldwide for the song, out of the 2,672 unique tracks I listened to. Also #1 on Beatrice Rana's Goldberg Variations album.

02.01.2026 15:16 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

A major personal goal for 2025 was extensive networking. I met so many interesting people around the world, which helped enable these meetings and future collaborations.

30.12.2025 17:12 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Not everything worked out. I submitted six major grant applications in 2025; five were rejected, despite substantial time and resources invested (still waiting to hear back on the last one). All three of our NeurIPS submissions were rejected.

30.12.2025 17:12 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Neurocybernetics at Scale 2025 - Neurocybernetics

In Oct, I co-organized Neurocybernetics at Scale, a three-day conference with ~300 participants, aimed at rethinking how neuroscience can scale in the modern era and how we might better integrate across levels, methods, and communities:
πŸ‘‰ neurocybernetics.cc/neurocyberne...

30.12.2025 17:12 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Beyond Clarity

In April, I co-organized Beyond Clarity, a small, closed interdisciplinary meeting (with Sool Park) focused on how the combinatorial yet discrete limits of language create gaps in meaning across fields, and how to overcome them:
πŸ‘‰ beyond-clarity.github.io

30.12.2025 17:12 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Highlights of 2025

Ayesha Vermani defended her PhD thesis this year. She helped jump start a new direction in integrative neuroscience:

Vermani et al. (2025), Meta-dynamical state space models for integrative neural data analysis. ICLR
πŸ‘‰ openreview.net/forum?id=SRp...
πŸ‘‰ youtu.be/SiXxPmkpYF8

30.12.2025 17:12 β€” πŸ‘ 11    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

I admire all who have donated and will donate to OpenReview. Thank you.

23.12.2025 11:15 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Today, the NeurIPS Foundation is proud to announce a $500,000 donation to OpenReview, supporting the infrastructure that makes modern ML research possible.

blog.neurips.cc/2025/12/15/s...

15.12.2025 13:00 β€” πŸ‘ 44    πŸ” 6    πŸ’¬ 0    πŸ“Œ 0
About CR | Champalimaud Foundation

πŸ—£οΈ English is the working language.

Curious about our culture, values, and scientific environment?

πŸ‘‰ Learn more: www.fchampalimaud.org/about-cr

16.12.2025 19:20 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
About CR | Champalimaud Foundation

INDP includes an initial year of advanced coursework πŸ“š + three lab rotations πŸ”¬, followed by PhD research. We welcome talented, motivated applicants from neuroscience, as well as physics, mathematics, statistics, computer science, electrical/biomedical engineering βš™οΈ, and related quantitative fields.

16.12.2025 19:20 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

Fully-funded International Neuroscience Doctoral Programme🧠 Champalimaud Foundation, Lisbon, Portugal πŸ‡΅πŸ‡Ή

Deadline: Jan 31, 2026
fchampalimaud.org/champalimaud...

Research program spans systems/computational/theoretical/clinical/sensory/motor neuroscience, neuroethology, intelligence, and more!!

16.12.2025 19:20 β€” πŸ‘ 26    πŸ” 17    πŸ’¬ 1    πŸ“Œ 0

You can have labelled lines and copies of microcircuits, too. But I'm just acknowledging some evolutionary pressure to use neuron-centric codes. (In fact, I'm fully a mixed-selectivity kinda neuroscientist.)

12.12.2025 15:27 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

One advantage of a monosemantic, sharply tuned, grandmother-cell, axis-aligned, neuron-centric representation over a polysemantic, mixed-selective, oblique population code is that it can benefit from evolution. Genes are good at operating at the level of individual cells. #neuroscience

12.12.2025 13:56 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
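A toy illustration of the distinction (my own sketch, not from the post): the same information in an axis-aligned code sits in single neurons, whereas a random rotation of that code spreads it across the population, so any cell-level mechanism, such as a gene acting on one cell, sees a clean signal only in the first case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population code: 8 stimuli represented by 8 neurons.
n = 8

# Axis-aligned / "grandmother cell" code: each stimulus drives exactly one neuron.
axis_aligned = np.eye(n)            # rows = stimuli, columns = neurons

# Mixed-selective / "oblique" code: identical geometry, just randomly rotated,
# so every neuron carries a little bit of every stimulus.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
mixed = axis_aligned @ Q

def single_neuron_concentration(responses):
    """For each stimulus, the fraction of response energy carried by its single
    most active neuron (1.0 = a perfectly labelled line)."""
    energy = responses ** 2
    return energy.max(axis=1) / energy.sum(axis=1)

print("axis-aligned:", single_neuron_concentration(axis_aligned).mean())  # 1.0
print("mixed       :", single_neuron_concentration(mixed).mean())         # well below 1
```

A linear population readout decodes both codes equally well (it is the same geometry), which is the point: the difference only matters to mechanisms that act on single cells.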

Theoretical Insights on Training Instability in Deep Learning TUTORIAL
uuujf.github.io/inst...

a gradient-flow-like regime (tiny step size) is slow and can overfit, while a large (but not too large) step size can transiently go far, converge faster, and find better solutions #optimization #NeurIPS2025

07.12.2025 00:02 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
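A minimal sketch of the step-size trade-off on a 1-D quadratic (my own toy example, not the tutorial's): gradient descent contracts the error by |1 - eta*L| per step, so a tiny eta mimics slow gradient flow, eta close to (but below) 2/L converges fastest, and eta > 2/L diverges.

```python
import numpy as np

# Toy loss f(x) = 0.5 * L * x**2 with curvature L; gradient descent
# x_{t+1} = x_t - eta * L * x_t contracts the error by |1 - eta*L| per step.
L = 10.0
x0 = 1.0

def gd_trajectory(eta, steps=20):
    x = x0
    xs = [x]
    for _ in range(steps):
        x = x - eta * L * x          # gradient step on f(x) = 0.5*L*x^2
        xs.append(x)
    return np.array(xs)

for eta, label in [(0.01, "tiny step ~ gradient flow (slow)"),
                   (0.18, "large but stable step (fast, |1-eta*L| = 0.8)"),
                   (0.21, "too large: eta > 2/L = 0.2, diverges")]:
    xs = gd_trajectory(eta)
    print(f"eta={eta:<5} {label:45s} |x_20| = {abs(xs[-1]):.3e}")
```

The quadratic toy only captures the speed/stability side; the tutorial's point about transiently moving far and landing at better solutions needs nonquadratic losses, where the large-step phase behaves qualitatively differently from gradient flow.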
Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training | OpenReview Diffusion models have achieved remarkable success across a wide range of generative tasks. A key challenge is understanding the mechanisms that prevent their memorization of training data and allow generalization. In this work, we investigate the role of the training dynamics in the transition from generalization to memorization. Through extensive experiments and theoretical analysis, we identify two distinct timescales: an early time $\tau_\mathrm{gen}$ at which models begin to generate high-quality samples, and a later time $\tau_\mathrm{mem}$ beyond which memorization emerges. Crucially, we find that $\tau_\mathrm{mem}$ increases linearly with the training set size $n$, while $\tau_\mathrm{gen}$ remains constant. This creates a growing window of training times with $n$ where models generalize effectively, despite showing strong memorization if training continues beyond it. It is only when $n$ becomes larger than a model-dependent threshold that overfitting disappears at infinite training times. These findings reveal a form of implicit dynamical regularization in the training dynamics, which allow to avoid memorization even in highly overparameterized settings. Our results are supported by numerical experiments with standard U-Net architectures on realistic and synthetic datasets, and by a theoretical analysis using a tractable random features model studied in the high-dimensional limit.

score/flow-matching diffusion models only start memorizing when trained for long enough
Bonnaire, T., Urfin, R., Biroli, G., & Mézard, M. (2025). Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training. The Thirty-Ninth Annual Conference on Neural Information Processing Systems.

07.12.2025 00:02 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
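The practical takeaway from the abstract, written out (restating its scaling, not adding to it): with tau_gen roughly constant in the training-set size n and tau_mem growing linearly in n, early stopping anywhere in between yields samples that generalize.

```latex
% Two timescales from the paper (notation as in the abstract):
%   tau_gen : training time at which high-quality samples appear (roughly constant in n)
%   tau_mem : training time beyond which memorization emerges (grows linearly in n)
\[
  \tau_\mathrm{gen} = \mathcal{O}(1), \qquad
  \tau_\mathrm{mem} \propto n
  \quad\Longrightarrow\quad
  \text{stop training at any } \tau \in (\tau_\mathrm{gen}, \tau_\mathrm{mem}),
\]
% so the safe window widens as the dataset grows: an implicit dynamical
% regularization rather than an explicit penalty.
```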
Learning Dynamics of RNNs in Closed-Loop Environments Recurrent neural networks (RNNs) trained on neuroscience-inspired tasks offer powerful models of brain computation. However, typical training paradigms rely on open-loop, supervised settings,...

analysis of a coupled dynamical system to study learning #cybernetics #learningdynamics
Ger, Y., & Barak, O. (2025). Learning dynamics of RNNs in closed-loop environments. arXiv [cs.LG]. http://arxiv.org/abs...

07.12.2025 00:02 β€” πŸ‘ 9    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
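A bare-bones sketch of what "closed-loop" means here (my own illustrative setup, not the paper's): in the open-loop setting the RNN is driven by a prerecorded input sequence, whereas in the closed loop its output moves an environment state that is fed back as the next input, so learning changes the very data distribution the network sees.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny RNN: h_{t+1} = tanh(W h_t + w_in * u_t), readout y_t = w_out . h_t
n_hidden = 16
W = rng.standard_normal((n_hidden, n_hidden)) * (0.9 / np.sqrt(n_hidden))
w_in = rng.standard_normal(n_hidden) * 0.5
w_out = rng.standard_normal(n_hidden) / np.sqrt(n_hidden)

def step_rnn(h, u):
    return np.tanh(W @ h + w_in * u)

def open_loop_rollout(inputs):
    """Open loop: the input sequence is fixed in advance (e.g. teacher forcing)."""
    h = np.zeros(n_hidden)
    outputs = []
    for u in inputs:
        h = step_rnn(h, u)
        outputs.append(w_out @ h)
    return np.array(outputs)

def closed_loop_rollout(T, env_state=1.0, leak=0.9):
    """Closed loop: the RNN's output drives a simple leaky-integrator environment
    whose state is fed back as the next input, coupling agent and environment."""
    h = np.zeros(n_hidden)
    outputs = []
    for _ in range(T):
        h = step_rnn(h, env_state)          # the environment state is the input
        y = w_out @ h
        env_state = leak * env_state + y    # the RNN output moves the environment
        outputs.append(y)
    return np.array(outputs)

print(open_loop_rollout(np.sin(0.3 * np.arange(50)))[:5])
print(closed_loop_rollout(50)[:5])
```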
Parallelizing MCMC Across the Sequence Length | OpenReview Markov chain Monte Carlo (MCMC) methods are foundational algorithms for Bayesian inference and probabilistic modeling. However, most MCMC algorithms are inherently sequential and their time complexity scales linearly with the sequence length. Previous work on adapting MCMC to modern hardware has therefore focused on running many independent chains in parallel. Here, we take an alternative approach: we propose algorithms to evaluate MCMC samplers in parallel across the chain length. To do this, we build on recent methods for parallel evaluation of nonlinear recursions that formulate the state sequence as a solution to a fixed-point problem and solve for the fixed-point using a parallel form of Newton's method. We show how this approach can be used to parallelize Gibbs, Metropolis-adjusted Langevin, and Hamiltonian Monte Carlo sampling across the sequence length. In several examples, we demonstrate the simulation of up to hundreds of thousands of MCMC samples with only tens of parallel Newton iterations. Additionally, we develop two new parallel quasi-Newton methods to evaluate nonlinear recursions with lower memory costs and reduced runtime. We find that the proposed parallel algorithms accelerate MCMC sampling across multiple examples, in some cases by more than an order of magnitude compared to sequential evaluation.

related:

Tricks to make it even faster.
Zoltowski, D. M., Wu, S., Gonzalez, X., Kozachkov, L., & Linderman, S. (2025). Parallelizing MCMC Across the Sequence Length. The Thirty-Ninth Annual Conference on Neural Information Processing Systems.

07.12.2025 00:02 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
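The core trick these parallel-in-time methods lean on, in miniature (a sketch under my own simplified assumptions, not the authors' code): once a Newton step linearizes the chain, each update becomes affine, x_t = a_t x_{t-1} + b_t, and affine maps compose associatively, so all T states can come out of a prefix scan instead of a sequential loop.

```python
import numpy as np

rng = np.random.default_rng(2)

# Affine recursion x_t = a_t * x_{t-1} + b_t -- the form each Newton/fixed-point
# iteration takes once the nonlinear recursion has been linearized.
T = 16
a = rng.uniform(0.5, 1.0, size=T)
b = rng.standard_normal(T)
x0 = 0.3

def combine(e1, e2):
    """Compose two affine maps: apply (a1, b1) first, then (a2, b2).
    The operator is associative, which is what permits a parallel scan."""
    a1, b1 = e1
    a2, b2 = e2
    return a2 * a1, a2 * b1 + b2

def hillis_steele_scan(elems, op):
    """Inclusive prefix scan in ceil(log2 T) rounds; within a round every combine
    is independent, so on parallel hardware one would use a scan primitive such
    as jax.lax.associative_scan instead of this Python loop."""
    elems = list(elems)
    d = 1
    while d < len(elems):
        elems = [elems[i] if i < d else op(elems[i - d], elems[i])
                 for i in range(len(elems))]
        d *= 2
    return elems

# The scan yields (A_t, B_t) with x_t = A_t * x0 + B_t for every t at once.
A, B = map(np.array, zip(*hillis_steele_scan(list(zip(a, b)), combine)))
x_parallel = A * x0 + B

# Sequential reference recursion.
x_seq, x = [], x0
for t in range(T):
    x = a[t] * x + b[t]
    x_seq.append(x)

assert np.allclose(x_parallel, np.array(x_seq))
print("parallel-scan states match the sequential recursion")
```

The Gibbs/MALA/HMC parallelization in the paper wraps this idea inside Newton iterations on the full nonlinear chain; the affine composition above is just the primitive each iteration reduces to.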
Predictability Enables Parallelization of Nonlinear State Space Models | OpenReview The rise of parallel computing hardware has made it increasingly important to understand which nonlinear state space models can be efficiently parallelized. Recent advances have shown that evaluating a state space model can be recast as solving a parallelizable optimization problem, and sometimes this approach yields dramatic speed-ups in evaluation time. However, the factors that govern the difficulty of these optimization problems remain unclear, limiting the larger adoption of the technique. In this work, we establish a precise relationship between the dynamics of a nonlinear system and the conditioning of its corresponding optimization formulation. We show that the predictability of a system, defined as the degree to which small perturbations in state influence future behavior, directly governs the number of optimization steps required for evaluation. In predictable systems, the state trajectory can be computed in $\mathcal{O}((\log T)^2)$ time, where $T$ is the sequence length, a major improvement over the conventional sequential approach. In contrast, chaotic or unpredictable systems exhibit poor conditioning, with the consequence that parallel evaluation converges too slowly to be useful. Importantly, our theoretical analysis demonstrates that for predictable systems, the optimization problem is always well-conditioned, whereas for unpredictable systems, the conditioning degrades exponentially as a function of the sequence length. We validate our claims through extensive experiments, providing practical guidance on when nonlinear dynamical systems can be efficiently parallelized, and highlighting predictability as a key design principle for parallelizable models.

Some of my favorites from #NeurIPS2025

more negative max Lyapunov exponent => faster convergence of parallelized RNN evaluation
Gonzalez, X., Kozachkov, L., Zoltowski, D. M., Clarkson, K. L., & Linderman, S. Predictability Enables Parallelization of Nonlinear State Space Models.

07.12.2025 00:02 β€” πŸ‘ 9    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
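A quick numerical illustration of why the sign of the Lyapunov exponent matters (my own toy, not from the paper): the products of Jacobians along the trajectory, a natural proxy for the predictability the abstract refers to, shrink for a contracting (predictable) map and explode for a chaotic one.

```python
import numpy as np

def lyapunov_and_jacobian_growth(r, x0=0.2, T=200, burn_in=100):
    """Estimate the max Lyapunov exponent of the logistic map x -> r*x*(1-x)
    and the log-magnitude of the product of Jacobians over T steps."""
    x = x0
    for _ in range(burn_in):                          # settle onto the attractor
        x = r * x * (1 - x)
    log_jac_sum = 0.0
    for _ in range(T):
        log_jac_sum += np.log(abs(r * (1 - 2 * x)))   # |f'(x)| along the trajectory
        x = r * x * (1 - x)
    lam = log_jac_sum / T                             # Lyapunov exponent estimate
    return lam, log_jac_sum                           # log|prod of Jacobians| = T * lam

for r, regime in [(3.2, "predictable (periodic)"), (4.0, "chaotic")]:
    lam, log_prod = lyapunov_and_jacobian_growth(r)
    print(f"r={r}: lambda ~ {lam:+.3f} ({regime}); log|J_T...J_1| ~ {log_prod:+.1f}")
```

T times lambda is exactly the log-magnitude of the accumulated Jacobian product, which is why a more negative exponent goes hand in hand with a better-conditioned, faster-converging parallel solve.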

This was a fantastic poster presentation!

05.12.2025 18:56 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
Post image

Melanie Mitchell's keynote reminds us that it is not easy to evaluate intelligence (AI, babies, animals, etc) and benchmarks can be VERY misleading. #NeurIPS2025

04.12.2025 23:19 β€” πŸ‘ 4    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The opposite of double descent in deep neural networks, and it questions the optimization worldview. :P

03.12.2025 19:32 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training | OpenReview

Bonnaire, T., Urfin, R., Biroli, G., & Mézard, M. (2025). Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training. The Thirty-Ninth Annual Conference on Neural Information Processing Systems.

03.12.2025 19:32 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
