
Jeff Johnston

@wjj.bsky.social

Postdoc in the Center for Theoretical Neuroscience at Columbia; previously at the University of Chicago. he/they. wj2.github.io

193 Followers  |  114 Following  |  18 Posts  |  Joined: 26.09.2023

Latest posts by wjj.bsky.social on Bluesky

W. Jeffrey Johnston - Postdoctoral position ad

By the way, if you’re interested in working together on problems like this, I’m starting my lab at UCSF this summer. Get in touch if you’re interested in doing a postdoc! More info here: wj2.github.io/postdoc_ad (7/7)

09.01.2026 19:06 — 👍 29    🔁 14    💬 1    📌 3
Preview
When and why modular representations emerge Experimental and theoretical work has argued both for and against the existence of specialized sub-populations of neurons (modules) within single brain regions. By studying artificial neural networks,...

We’ve also rewritten large parts of the manuscript for clarity and further developed some of our experimental predictions. I think the paper is much improved, and I encourage you to check it out even if you read the original! Here’s the link again: doi.org/10.1101/2024... (6/7)

09.01.2026 19:06 — 👍 11    🔁 2    💬 1    📌 0
Post image

We also now include a generalization of our approach to a large set of novel task types, and show that our main result generalizes as well! (5/7)

09.01.2026 19:06 — 👍 3    🔁 0    💬 1    📌 0
Post image

Though, as before, this benefit can emerge with specialized sub-populations of units (i.e., explicit modularity) or specialized subspaces at the population level (i.e., implicit modularity). We characterize the emergence of both forms of modularity! (4/7)

09.01.2026 19:06 — 👍 5    🔁 1    💬 1    📌 0
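To make the explicit/implicit distinction concrete, here is a toy sketch with made-up numbers (not the paper's analysis): two readout patterns for two tasks can be equally "modular" at the population level, while only one is modular unit by unit.

```python
import numpy as np

# Hypothetical readout weights for two tasks across 6 units.
# Explicitly modular: each unit contributes to only one task.
W_explicit = np.array([[1.0, 0.9, 1.1, 0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0, 1.0, 1.2, 0.8]])

# Implicitly modular: every unit contributes to both tasks, but the
# two readout directions are still orthogonal at the population level.
W_implicit = np.array([[1.0,  1.0, 1.0,  1.0, 1.0,  1.0],
                       [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]])

def unit_specialization(W):
    """Mean per-unit selectivity: 1 if each unit serves only one task."""
    a, b = np.abs(W[0]), np.abs(W[1])
    return np.mean(np.abs(a - b) / (a + b + 1e-12))

def subspace_overlap(W):
    """Cosine between the two tasks' readout directions."""
    return abs(W[0] @ W[1]) / (np.linalg.norm(W[0]) * np.linalg.norm(W[1]))

# Both are modular at the population level (orthogonal readouts)...
print(subspace_overlap(W_explicit), subspace_overlap(W_implicit))
# ...but only the first is modular unit by unit.
print(unit_specialization(W_explicit), unit_specialization(W_implicit))
```

Both weight matrices have orthogonal task readouts (overlap 0), but the per-unit specialization score is 1 for the explicit case and 0 for the implicit one.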
Post image

We now spend more time introducing our framework, as well as the main benefit of modularity: It provides a factorized solution to complex tasks. (3/7)

09.01.2026 19:06 — 👍 3    🔁 0    💬 1    📌 0
Post image

The core result is still the same: Whether or not modularity emerges depends on both the format – or representational geometry – of the input to the network and the structure of the task it is trained to perform. (2/7)

09.01.2026 19:06 — 👍 6    🔁 1    💬 1    📌 0
Post image

When and why do modular representations emerge in neural networks?

@stefanofusi.bsky.social and I posted a preprint answering this question last year, and now it has been extensively revised, refocused, and generalized. Read more here: doi.org/10.1101/2024... (1/7)

09.01.2026 19:06 — 👍 76    🔁 18    💬 1    📌 2
Preview
Semi-orthogonal subspaces for value mediate a tradeoff between... When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces....

Thanks for reading! Here’s another link to the paper: arxiv.org/abs/2309.07766
This is also the second panel in a triptych of new work on how the brain does (or doesn’t) make sense of multiple stimuli. First part is here: twitter.com/wjeffjohnsto...

The last part is coming soon! (11/n)

26.09.2023 15:38 — 👍 2    🔁 1    💬 0    📌 0
Post image

Further, we show that these geometric measures are associated with trial-to-trial and average choice behavior. In particular, some suboptimal choices look like spatial misbinding errors (mixing up which value was on the left or right)! (10/n)

26.09.2023 15:38 — 👍 1    🔁 1    💬 1    📌 0
Post image

Finally, we show that a similar process unfolds over time: The current offer is represented in a distinct subspace from the remembered past offer – this time, the subspaces are actually orthogonal! (9/n)

26.09.2023 15:37 — 👍 2    🔁 1    💬 1    📌 0
Post image

Then, we show that the representational geometry in all but one of the regions we recorded from supports reliable binding; at the same time, the geometry can also support reliable generalization in every region. (8/n)

26.09.2023 15:37 — 👍 1    🔁 1    💬 1    📌 0
Post image

We develop a mathematical theory that captures the tradeoff between the reliability of binding and generalization as a function of the representational geometry – which we then relate back to subspace correlation. (7/n)

26.09.2023 15:36 — 👍 2    🔁 1    💬 1    📌 0
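A toy version of such a tradeoff (illustrative only, not the paper's theory): for unit-norm value axes with correlation c, the cross-generalization signal grows with c, while the binding signal, proportional to the distance between the axes, shrinks as sqrt(2 - 2c).

```python
import numpy as np

def tradeoff(c):
    """Toy tradeoff for unit-norm axes u, w with correlation c:
    generalization signal scales with c; binding signal scales
    with the distance ||u - w|| = sqrt(2 - 2c)."""
    return c, np.sqrt(2 - 2 * c)

for c in (0.0, 0.5, 1.0):
    gen, bind = tradeoff(c)
    print(f"corr={c:.1f}  generalization={gen:.2f}  binding={bind:.2f}")
```

At c = 0 binding is maximal but nothing generalizes; at c = 1 decoders generalize perfectly but binding information vanishes; semi-orthogonal axes sit in between.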
Post image

That they are not fully orthogonal means that the representation of value may be abstract: A decoder trained to decode the value of left offers could generalize to decode the value of right offers. This is important for learning and generalization to novel situations. (6/n)

26.09.2023 15:36 — 👍 1    🔁 1    💬 1    📌 0
Post image

That they are not the same means that this representation *binds* offer value to position, and a decoder can figure out which value corresponds to which offer. This wouldn’t be the case if both values were encoded in the same subspace! (5/n)

26.09.2023 15:36 — 👍 1    🔁 1    💬 1    📌 0
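A toy sketch of that binding readout (hypothetical axes and values, not the paper's data): because the left and right value axes differ, a linear readout along their difference recovers where a value was shown.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((50, 2)))
u = Q[:, 0]                                   # toy "left value" axis
w = 0.5 * Q[:, 0] + np.sqrt(0.75) * Q[:, 1]   # toy "right value" axis

# The same value shown on the left vs. on the right:
value = 0.8
x_left, x_right = value * u, value * w

# Because u != w, reading out along (u - w) recovers WHERE the value
# was shown -- the binding of value to position. If both values lived
# in the same subspace (u == w), this readout would vanish.
side_axis = u - w
print(x_left @ side_axis > 0, x_right @ side_axis > 0)  # True False
```

The sign of the readout distinguishes left from right presentations of an identical value, which is exactly what a shared subspace could not do.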

The two subspaces are not perfectly orthogonal – nor are they the same. They are semi-orthogonal – and this is important! (4/n)

26.09.2023 15:35 — 👍 1    🔁 1    💬 1    📌 0
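As a toy construction (hypothetical population size and angle, not the recorded geometry), semi-orthogonality just means the two value axes sit at an intermediate angle: their overlap is neither 1 (identical) nor 0 (orthogonal).

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 50

# Two orthonormal directions in a toy 50-neuron population.
Q, _ = np.linalg.qr(rng.standard_normal((n_units, 2)))
u = Q[:, 0]                       # axis encoding left-offer value
# Right-offer axis rotated 60 degrees from u: semi-orthogonal.
w = np.cos(np.pi / 3) * Q[:, 0] + np.sin(np.pi / 3) * Q[:, 1]

overlap = abs(u @ w)              # cosine of the angle between the axes
print(round(overlap, 2))          # 0.5 -- neither 1 (same) nor 0 (orthogonal)
```

The single number `overlap` is the subspace correlation for one-dimensional axes; the intermediate value is what "semi-orthogonal" refers to.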
Post image

How does the monkey keep the different offer values straight? We show that the value of offers presented on the left is encoded in one subspace of population activity, while the value of offers presented on the right is in a distinct subspace. (3/n)

26.09.2023 15:35 — 👍 1    🔁 1    💬 1    📌 0
Post image

In this setup, monkeys chose between two sequentially presented offers based on their expected reward value. While they did this, neural activity was recorded from neurons in five different value-sensitive regions. (2/n)

26.09.2023 15:35 — 👍 1    🔁 1    💬 1    📌 0

How does the brain represent multiple different things at once in a single population of neurons? @justfineneuro.bsky.social, @benhayden.bsky.social, B Ebitz, M Yoo, and I show that it uses semi-orthogonal subspaces for each item.
Preprint here: arxiv.org/abs/2309.07766
Clouds below! (1/n)

26.09.2023 15:34 — 👍 29    🔁 12    💬 1    📌 0
