W. Jeffrey Johnston - Postdoctoral position ad
By the way, if you're interested in working together on problems like this, I'm starting my lab at UCSF this summer. Get in touch if you're interested in doing a postdoc! More info here: wj2.github.io/postdoc_ad (7/7)
09.01.2026 19:06
When and why modular representations emerge
Experimental and theoretical work has argued both for and against the existence of specialized sub-populations of neurons (modules) within single brain regions. By studying artificial neural networks,...
We've also rewritten large parts of the manuscript for clarity and further developed some of our experimental predictions. I think the paper is much improved, and I encourage you to check it out even if you read the original! Here's the link again: doi.org/10.1101/2024... (6/7)
09.01.2026 19:06
We also now include a generalization of our approach to a large set of novel task types, and show that our main result generalizes as well! (5/7)
09.01.2026 19:06
Though, as before, this benefit can emerge with specialized sub-populations of units (i.e., explicit modularity) or specialized subspaces at the population level (i.e., implicit modularity). We characterize the emergence of both forms of modularity! (4/7)
09.01.2026 19:06
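One way to see the distinction drawn above is with a toy sketch (not the paper's analysis; the codes, sizes, and metrics below are illustrative assumptions). Both codes give each task variable its own orthogonal population subspace, but only the first does so with a specialized sub-population of units:

```python
import numpy as np

rng = np.random.default_rng(2)
n_units = 40

# Explicit modularity: disjoint sub-populations carry each variable.
A_exp = np.zeros((n_units, 2))
A_exp[:20, 0] = rng.standard_normal(20)   # units 0-19 encode variable 1
A_exp[20:, 1] = rng.standard_normal(20)   # units 20-39 encode variable 2

# Implicit modularity: every unit mixes both variables, but the two
# population-level encoding axes are still orthogonal.
Q, _ = np.linalg.qr(rng.standard_normal((n_units, 2)))
A_imp = Q

def subspace_overlap(A):
    """Cosine similarity between the two encoding axes (0 = factorized)."""
    a = A[:, 0] / np.linalg.norm(A[:, 0])
    b = A[:, 1] / np.linalg.norm(A[:, 1])
    return abs(a @ b)

def unit_selectivity(A):
    """Fraction of units driven by exactly one of the two variables."""
    active = np.abs(A) > 1e-9
    return np.mean(active.sum(axis=1) == 1)

for name, A in [("explicit", A_exp), ("implicit", A_imp)]:
    print(name, round(subspace_overlap(A), 3), unit_selectivity(A))
```

Both codes are equally factorized at the population level; the unit-level selectivity measure is what separates them.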
We now spend more time introducing our framework, as well as the main benefit of modularity: It provides a factorized solution to complex tasks. (3/7)
09.01.2026 19:06
The core result is still the same: Whether or not modularity emerges depends on both the format (or representational geometry) of the input to the network and the structure of the task it is trained to perform. (2/7)
09.01.2026 19:06
When and why do modular representations emerge in neural networks?
@stefanofusi.bsky.social and I posted a preprint answering this question last year, and now it has been extensively revised, refocused, and generalized. Read more here: doi.org/10.1101/2024... (1/7)
09.01.2026 19:06
Semi-orthogonal subspaces for value mediate a tradeoff between...
When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces....
Thanks for reading! Here's another link to the paper: arxiv.org/abs/2309.07766
This is also the second panel in a triptych of new work on how the brain does (or doesn't) make sense of multiple stimuli. First part is here: twitter.com/wjeffjohnsto...
The last part is coming soon! (11/n)
26.09.2023 15:38
Further, we show that these geometric measures are associated with trial-to-trial and average choice behavior. In particular, some suboptimal choices look like spatial misbinding errors (mixing up which value was on the left or right)! (10/n)
26.09.2023 15:38
Finally, we show that a similar process unfolds over time: The current offer is represented in a distinct subspace from the remembered past offer; this time, the subspaces are actually orthogonal! (9/n)
26.09.2023 15:37
Then, we show that the representational geometry in all but one of the regions we recorded from supports reliable binding; at the same time, the geometry can also support reliable generalization in every region. (8/n)
26.09.2023 15:37
We develop a mathematical theory that captures the tradeoff between the reliability of binding and generalization as a function of the representational geometry, which we then relate back to subspace correlation. (7/n)
26.09.2023 15:36
That they are not fully orthogonal means that the representation of value may be abstract: A decoder trained to decode the value of left offers could generalize to decode the value of right offers. This is important for learning and generalization to novel situations. (6/n)
26.09.2023 15:36
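The cross-generalization idea above can be sketched with a small simulation (a toy model, not the paper's analysis; the encoding axes, noise level, and least-squares decoder are illustrative assumptions). A decoder fit only on "left offer" trials transfers to "right offer" trials when the two value axes overlap, and fails when they are orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 100
n_trials = 400

def transfer(cos_sim):
    """Train a linear value decoder on simulated left-offer trials,
    then test it on right-offer trials whose value axis overlaps the
    left axis by cos_sim."""
    u = rng.standard_normal(n_neurons)
    u /= np.linalg.norm(u)
    w = rng.standard_normal(n_neurons)
    w -= (w @ u) * u                                  # component orthogonal to u
    w /= np.linalg.norm(w)
    v = cos_sim * u + np.sqrt(1.0 - cos_sim**2) * w   # cos(u, v) = cos_sim

    vals = rng.uniform(0.0, 1.0, n_trials)
    X_left = np.outer(vals, u) + 0.1 * rng.standard_normal((n_trials, n_neurons))
    X_right = np.outer(vals, v) + 0.1 * rng.standard_normal((n_trials, n_neurons))

    d, *_ = np.linalg.lstsq(X_left, vals, rcond=None)  # decoder fit on left only
    return np.corrcoef(X_right @ d, vals)[0, 1]        # evaluated on right

r_overlap = transfer(0.7)     # semi-orthogonal subspaces: decoder transfers
r_orthogonal = transfer(0.0)  # fully orthogonal subspaces: no transfer
print(f"overlap 0.7: r = {r_overlap:.2f}; overlap 0.0: r = {r_orthogonal:.2f}")
```

The non-zero overlap is what makes the value representation abstract in the sense used above.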
That they are not the same means that this representation *binds* offer value to position, and a decoder can figure out which value corresponds to which offer. This wouldn't be the case if both values were encoded in the same subspace! (5/n)
26.09.2023 15:36
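A toy simulation makes the binding point concrete (an illustrative sketch under assumed axes, noise, and a least-squares decoder, not the paper's method). With semi-orthogonal subspaces, a left-value decoder tracks the left value and ignores the right one; if both values share one subspace, the decoder's output confounds them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
n_trials = 500

def binding_check(cos_sim):
    """Correlation of a left-value decoder's held-out predictions with
    the true left and right values, for a given subspace overlap."""
    u = rng.standard_normal(n_neurons)
    u /= np.linalg.norm(u)
    w = rng.standard_normal(n_neurons)
    w -= (w @ u) * u
    w /= np.linalg.norm(w)
    v = cos_sim * u + np.sqrt(1.0 - cos_sim**2) * w   # cos(u, v) = cos_sim

    left = rng.uniform(0.0, 1.0, n_trials)
    right = rng.uniform(0.0, 1.0, n_trials)
    X = (np.outer(left, u) + np.outer(right, v)
         + 0.1 * rng.standard_normal((n_trials, n_neurons)))

    half = n_trials // 2
    d, *_ = np.linalg.lstsq(X[:half], left[:half], rcond=None)  # fit on half
    pred = X[half:] @ d                                         # held-out eval
    return (np.corrcoef(pred, left[half:])[0, 1],
            np.corrcoef(pred, right[half:])[0, 1])

r_distinct = binding_check(0.5)  # semi-orthogonal: left/right separable
r_shared = binding_check(1.0)    # identical subspace: values confounded
print(f"distinct: {r_distinct}; shared: {r_shared}")
```

In the shared-subspace case the population only carries the sum of the two values, so no decoder can tell them apart.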
The two subspaces are not perfectly orthogonal, nor are they the same. They are semi-orthogonal, and this is important! (4/n)
26.09.2023 15:35
How does the monkey keep the different offer values straight? We show that the value of offers presented on the left is encoded in one subspace of population activity, while the value of offers presented on the right is in a distinct subspace. (3/n)
26.09.2023 15:35
In this setup, monkeys chose between two sequentially presented offers based on their expected reward value. While they did this, neural activity was recorded from neurons in five different value-sensitive regions. (2/n)
26.09.2023 15:35
How does the brain represent multiple different things at once in a single population of neurons? @justfineneuro.bsky.social, @benhayden.bsky.social, B Ebitz, M Yoo, and I show that it uses semi-orthogonal subspaces for each item.
Preprint here: arxiv.org/abs/2309.07766
Clouds below! (1/n)
26.09.2023 15:34
cognitive neuroscientist, interested in human timing & time perception, neural oscillations, mother², she/her
brainthemind.com
doing cognitive neuroscience, interested in stats and computational methods.
Neurogeneticist interested in the relations between genes, brains, and minds. Author of INNATE (2018) and FREE AGENTS (2023)
Computational Cognitive Neuroscientist at CiNet & Osaka University. Category learning to concepts & everything between (semantic/episodic memory). Cognitive aging/damage in models & brains. To understand the brain & AI.
PI + parent = professional cat-herder • inclusiveness • he/him • studying the neuroscience of communication at Northeastern University
Textbook: The Neuroscience of Language (Cambridge University Press)
http://jonathanpeelle.net
Professor, director of neuroscience lab at Rutgers University β neuroimaging, cognitive control, network neuroscience
Writing book βBrain Flows: How Network Dynamics Compose The Human Mindβ for Princeton University Press
https://www.colelab.org
Physicist by training working in Neuroscience (PhD). Into brain states and their transitions. Newbie climber
https://ldallap.github.io/
Associate Professor at Center for Human Nature, Artificial Intelligence and Neuroscience (CHAIN), at Hokkaido University, working on Artificial Life/Virtual Reality/Embodied Self/Sense of Presence/Altered States/Computational Phenomenology
Grad Student @ SKKU
github: https://github.com/didch1789
Hi!
I'm Andrew Frankel, a software engineer, machine learning researcher, and neuroscience student.
Lifelong learner | Cog neuro | 🇬🇧 Imperial UG BME graduate | 🇰🇷 @cnir-SKKU
Philosopher / Cognitive Scientist working on self consciousness and social interactions in humans and artificial agents/ Embodiment/ AI / Art & Science
In Lisbon & London
PhD candidate in Deep Learning at the University of Trieste, Italy. Formerly visiting at @ucl.ac.uk. Representations, robustness, empirical methods, kernels, physics & neuroscience. Quantitativist, overenthusiastic tinkerer.
https://ballarin.cc/
Postdoc in the Hayden lab at Baylor College of Medicine studying neural computations of natural language & communication in humans. Sister to someone with autism. F32 NIDCD Fellow | Autism Research Institute funded | she/her. melissafranch.com
Scientist: PhD in mathematics. Working in hyperbolic geometry, analysis in metric spaces and mathematical physics (gravitational waves and general relativity), broad range of other scientific interests.
Here mostly to connect with researchers.
he/him
what kind of world do we wish to live in?
neuroscience phd candidate at NYU CNS • advised by Cristina Savin and @eerosim.bsky.social • interested in neural principles governing adaptive behavior
Neural dynamics, control, learning. Love talking about the brain, math, poetry, science+arts! Striving for kindness. (She/her)
Postdoc @ NYU (Prev: UW, UCL, IISc)
https://harshagurnani.github.io/