Republicans want states to cut food aid errors. Their bill could do the opposite: www.nytimes.com/2025/07/02/u...
02.07.2025 22:16

@chriswlynn.bsky.social
Studying the statistical physics of the brain & other complex systems | Asst Prof of Physics & QBio at Yale | Lab: lynnlab.yale.edu/
Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s415...
02.07.2025 19:03
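For a feel of what a "tiny RNN" looks like in practice, here is a minimal sketch (my illustration, not the authors' code; the task, network size, and training details are all assumptions): a 2-unit GRU trained to predict the next choice of a synthetic win-stay/lose-shift agent from its past choices and rewards.

```python
# Sketch only: a tiny (2-unit) RNN fit to synthetic choice behavior.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fake behavior: a noisy win-stay / lose-shift agent on a 2-armed bandit.
T = 2000
choices, rewards = [0], [1]
for t in range(1, T):
    stay = choices[-1] if rewards[-1] else 1 - choices[-1]
    c = stay if torch.rand(()).item() < 0.8 else 1 - stay
    choices.append(c)
    rewards.append(int(torch.rand(()).item() < (0.7 if c == 0 else 0.3)))

# Inputs: (choice, reward) on each trial; target: the next trial's choice.
x = torch.tensor([[float(c), float(r)] for c, r in zip(choices, rewards)])
inputs = x[:-1].unsqueeze(0)                  # shape (1, T-1, 2)
targets = torch.tensor(choices[1:], dtype=torch.float)

class TinyRNN(nn.Module):
    def __init__(self, hidden=2):
        super().__init__()
        self.gru = nn.GRU(2, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)
    def forward(self, u):
        h, _ = self.gru(u)                    # hidden trajectory, one 2D point per trial
        return self.readout(h).squeeze(-1)

model = TinyRNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(300):
    opt.zero_grad()
    loss = loss_fn(model(inputs).squeeze(0), targets)
    loss.backward()
    opt.step()
print(f"final prediction loss: {loss.item():.3f}")
# With 2 hidden units the learned dynamics live in a plane, so the model's
# "algorithm" can be read off from its 2D flow field -- the interpretability win.
```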
Interested in coarse-graining, irreversibility, or neural activity in the hippocampus?
If so, check out our new preprint exploring how maximizing the irreversibility preserved from microscopic dynamics leads to interpretable coarse-grained descriptions of biological systems!
Living systems operate nonequilibrium processes across many scales in space and time. Is there a model-free way to bridge the descriptions at different levels of coarse-graining? Here we find that preserving the evidence of time-reversal symmetry breaking works remarkably well!
05.06.2025 19:26
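For readers who want the quantity being preserved spelled out: the standard measure of irreversibility for a Markov jump process (my notation, not necessarily the paper's) is the steady-state entropy production rate,

```latex
\dot{S} \;=\; \frac{1}{2} \sum_{i \neq j} \left( w_{ij}\,p_j - w_{ji}\,p_i \right)
\ln \frac{w_{ij}\,p_j}{w_{ji}\,p_i} \;\geq\; 0 ,
```

where w_{ij} is the transition rate from state j to state i and p is the stationary distribution. Coarse-graining adds fluxes between lumped states, and by the log-sum inequality this can only shrink the measured entropy production, which is why "preserving the evidence of time-reversal symmetry breaking" is a meaningful optimization target.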
Check out the preprint for much more: "Coarse-graining dynamics to maximize irreversibility"
And a massive shout out to the leaders of the project: Qiwei Yu (@qiweiyu.bsky.social) and Matt Leighton (@mleighton.bsky.social)
In neural dynamics in the hippocampus, the maximum irreversibility coarse-graining uncovers a large-scale loop of flux in neural space that is directly driven by the animal's movement in physical space.
05.06.2025 18:16

In chemical oscillators, the maximum irreversibility coarse-graining picks out macroscopic loops of flux that dominate the dynamics.
05.06.2025 18:16

Across a range of living systems, this maximum irreversibility coarse-graining uncovers key biological functions.
For example, in models of kinesin (a motor protein that ships cargo inside your cells), we can derive simplified dynamics without losing any irreversibility.
When living systems burn energy, they drive irreversible dynamics and produce entropy.
Under coarse-graining, the apparent irreversibility can only decrease.
This means that -- at every level of description -- there's a unique coarse-graining with maximum irreversibility.
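To make the last few posts concrete, here is a minimal sketch (mine, not the paper's code; the rates and system are arbitrary assumptions): it computes the entropy production rate (EPR) of a driven four-state cycle, then brute-force searches all three-group lumpings for the one that preserves the most irreversibility. Note that with only two groups, stationarity forces the coarse fluxes to balance exactly, so at least three groups are needed to retain any irreversibility at all.

```python
# Sketch (not the paper's code): entropy production of a Markov jump process,
# and a brute-force search for the 3-state lumping that preserves most of it.
import itertools
import numpy as np

def stationary(W):
    """Stationary distribution of rate matrix W (W[i, j] = rate j -> i)."""
    vals, vecs = np.linalg.eig(W)
    p = np.real(vecs[:, np.argmin(np.abs(vals))])
    return p / p.sum()

def epr(J):
    """Entropy production rate from stationary fluxes J[i, j] = flux j -> i."""
    s = 0.0
    for i, j in itertools.permutations(range(len(J)), 2):
        if J[i, j] > 0 and J[j, i] > 0:
            s += 0.5 * (J[i, j] - J[j, i]) * np.log(J[i, j] / J[j, i])
    return s

# A driven 4-state cycle; forward rates beat backward rates, breaking
# detailed balance (rates chosen arbitrarily for illustration).
k_fwd, k_bwd = 2.0, 0.5
W = np.zeros((4, 4))
for i in range(4):
    W[(i + 1) % 4, i] = k_fwd
    W[i, (i + 1) % 4] = k_bwd
W -= np.diag(W.sum(axis=0))

p = stationary(W)
J = W * p[None, :]  # steady-state probability fluxes

# Lump the 4 states into 3 groups; coarse fluxes are sums of fine fluxes.
# (With only 2 groups, stationarity forces J_AB = J_BA: zero irreversibility.)
best_epr, best_labels = -1.0, None
for labels in itertools.product(range(3), repeat=4):
    if len(set(labels)) != 3:
        continue
    Jc = np.zeros((3, 3))
    for i, j in itertools.permutations(range(4), 2):
        if labels[i] != labels[j]:
            Jc[labels[i], labels[j]] += J[i, j]
    if (s := epr(Jc)) > best_epr:
        best_epr, best_labels = s, labels

print(f"microscopic EPR: {epr(J):.3f}")
print(f"max coarse-grained EPR: {best_epr:.3f}, grouping {best_labels}")
```

In this toy example the best lumping merges two adjacent states on the cycle, keeping the macroscopic loop of flux intact; merging opposite states destroys it.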
Biology consumes energy at the microscale to power functions across all scales: From proteins and cells to entire populations of animals.
Led by @qiweiyu.bsky.social and @mleighton.bsky.social, we study how coarse-graining can help to bridge this gap 👇🧵
arxiv.org/abs/2506.01909
Science is built on public trust, federal funding, and academic freedom. All of these are currently being eroded, but what can we do?
Join us for an online panel on "Being a Voice for Science" (June 11, 2-3pm ET).
Info and registration: engage.aps.org/dbio/resourc...
Apply or nominate for DBIO awards by June 2! Awards include:
APS Fellowship
Max DelbrΓΌck Prize in Biological Physics
Award for Outstanding Doctoral Thesis Research in Biological Physics
Huge shoutout to my student David Carcamo for leading this paper, and Nick Weaver and Purushottam Dixit for invaluable contributions! ❤️
16.05.2025 17:14

...but these applications are limited by our ability to solve difficult statistical physics problems! We thus need new creative methods in order to construct optimal compressions of complex systems.
16.05.2025 17:14

We review emerging applications, which range from neuroscience and biology to machine learning and engineering...
16.05.2025 17:14

Starting only with MDL, we show that the optimal details provide as much information about the data as possible while remaining maximally random with regard to all unobserved details
This "minimax entropy" principle was proposed 25 years ago but remains largely unexplored
Information theory makes these intuitions concrete: Each model gives us an encoding of the data, and the shortest code provides the best compression of the data. This is the minimum description length (MDL) principle
16.05.2025 17:14

When constructing models of the world, we aim for good compressions: models that are as accurate as possible with as few details as possible. But which details should we include in a model?
An answer lies in the "minimax entropy" principle 👇
arxiv.org/abs/2505.01607
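A toy version of the MDL logic from this thread (my example, not from the paper): compare the two-part code length, parameter cost plus data cost, of two models of a binary time series, and keep the shorter code.

```python
# Toy MDL model selection: iid Bernoulli vs. first-order Markov.
import numpy as np

rng = np.random.default_rng(0)

# Data: a binary sequence with real temporal structure (a sticky Markov chain).
n = 5000
x = np.zeros(n, dtype=int)
for t in range(1, n):
    x[t] = x[t - 1] if rng.random() < 0.9 else 1 - x[t - 1]

def h(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Model 1: iid Bernoulli (1 parameter); code = data bits + parameter bits.
L_iid = n * h(x.mean()) + 0.5 * 1 * np.log2(n)

# Model 2: first-order Markov (2 parameters: flip probability from each state).
L_markov = 0.0
for s in (0, 1):
    steps = x[1:][x[:-1] == s]        # symbols that follow state s
    L_markov += len(steps) * h(steps.mean())
L_markov += 0.5 * 2 * np.log2(n)

print(f"code length, iid model:    {L_iid:,.0f} bits")
print(f"code length, Markov model: {L_markov:,.0f} bits")
# The Markov code is far shorter: MDL selects the model that genuinely
# compresses the data, charging each extra parameter its fair price.
```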
What is the connection between information theory and statistical physics? And how can this connection help us understand the brain?
Final installment with the Simplifying Complexity podcast (@bhcomplexity.bsky.social)
podcasts.apple.com/us/podcast/w...
Statistical physics and information theory may seem daunting, but with a little insight they can become intuitive and are actually deeply related.
A fun chat on the Simplifying Complexity podcast (@bhcomplexity.bsky.social)
podcasts.apple.com/us/podcast/w...
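One concrete one-line version of that relationship (a textbook fact, not a claim from the episode): Jaynes showed that maximizing the Shannon entropy subject to a mean-energy constraint recovers the Boltzmann distribution of statistical physics,

```latex
\max_{p}\; -\sum_x p(x)\ln p(x)
\quad \text{s.t.} \quad \sum_x p(x)\,E(x) = \langle E \rangle
\;\;\Longrightarrow\;\;
p(x) = \frac{e^{-\beta E(x)}}{Z} ,
```

with the inverse temperature β entering as a Lagrange multiplier and Z as the normalizing partition function.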
Hear Christopher Lynn from Yale on the latest episode of #SimplifyingComplexity.
How can we use the ideas of complexity and network science to better understand the brain?
Apple: buff.ly/3VTjG9I
Spotify: buff.ly/3WaR9MA
#complex #complexity #ComplexSystems #ComplexityScience
For much more, see the preprint: "Simple low-dimensional computations explain variability in neuronal activity"
16.04.2025 13:49

Moreover, the inferred connection weights are (1) sparse, (2) heavy-tailed, (3) balanced, and (4) directed -- all key features observed in synaptic wiring between neurons.
16.04.2025 13:49

With only a small number of direct inputs (no interactions between inputs), we are able to predict complex higher-order dependencies on multiple inputs.
16.04.2025 13:49

This means that real neurons are closely approximated *quantitatively* by the first artificial neuron proposed by McCulloch and Pitts in 1943.
16.04.2025 13:49
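To ground the thread above, here is a minimal sketch (my toy, not the paper's analysis; the data and sizes are made up) of what fitting a McCulloch-Pitts-style unit looks like: a single weighted sum of inputs passed through a sigmoid, fit to predict one neuron's binary activity.

```python
# Sketch: fit a McCulloch-Pitts style unit (weighted sum + sigmoid) to
# predict one neuron's activity from a population of binary inputs.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: 20 binary input neurons, 10000 time bins.
X = (rng.random((10000, 20)) < 0.3).astype(float)

# Ground truth: sparse, signed (excitatory and inhibitory) input weights.
w_true = np.zeros(20)
w_true[:4] = [3.0, 2.0, -2.5, 1.5]
logits = X @ w_true - 1.0
y = (rng.random(10000) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit by gradient descent on the logistic (cross-entropy) loss.
w, b = np.zeros(20), 0.0
for step in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print("recovered weights (first 6):", np.round(w[:6], 2))
# Although the model contains no interaction terms between inputs, the
# nonlinearity alone produces higher-order dependencies on multiple inputs --
# the point made in the thread above.
```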