📢 PhD position in the NeuroAI of Language
Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language
05.03.2026 13:34
Original is from 2021. Still relevant 🧠🦋🧪
I wonder how many kinds of templates would be needed to capture most "canned" cog/neuro science out there…
16.02.2026 10:37
🧨 Preprint alert
Is it easier to find a ball than a shoe? The answer lies in how variable we think these objects are in the real world. www.biorxiv.org/content/10.6...
w/ the amazing @dkaiserlab.bsky.social & @luchunyeh.bsky.social 🦋
🧵 1/8
09.02.2026 10:26
New PhD and post-doc job openings!
Join me and Prof. Nina Kazanina @ Uni Geneva, Switzerland, to take part in an exciting project on relations and binding in language and vision, explored with cutting-edge neurophysiology (#iEEG and MEG).
Full details in the job offer below.
30.01.2026 10:41
Finally out in eLife!!
"Early foveal cortex predicts the features of saccade targets through feedback from higher cortical areas."
elifesciences.org/articles/107...
26.01.2026 14:20
Poster advertising symposia on Frances Egan's book "Deflating Mental Representation" (13/04 - 15/04). More info: https://tinyurl.com/NMO-ISPSM and https://tinyurl.com/Phimisci-Egan
✨ Call for papers and symposia on Frances Egan's (@francesegan.bsky.social) "Deflating Mental Representation" (13/04 - 15/04) alongside Neural Mechanisms Online and Philosophy and the Mind Sciences (@phimisci.bsky.social)!
More info: tinyurl.com/NMO-ISPSM and tinyurl.com/Phimisci-Egan
#philsky
21.01.2026 12:56
main goal for this year: find a new job!
looking for a role with fun and complex technical challenges within a great community. my main expertise is in signal processing/EEG/MEG, but topic-wise I am quite flexible.
science/industry both great! starting mid-year. nschawor.github.io/cv
16.01.2026 10:14
New preprint: Inference over hidden contexts shapes the geometry of conceptual knowledge for flexible behaviour.
In this pre-registered study, our core claim was that we don't just learn stimulus-reward associations. We infer hidden context, and that inference rewires attention and neural state space on the fly.
1/8
08.01.2026 07:46
Dispute erupts over universal cortical brain-wave claim
The debate highlights opposing views on how the cortex transmits information.
A "universal" pattern of cortical brain oscillations may be less ubiquitous than previously proposed.
By @claudia-lopez.bsky.social
#neuroskyence
www.thetransmitter.org/brain-waves/...
12.12.2025 14:20
Good postdoc opportunity ⬇️
09.12.2025 08:52
Lindsay Lab - Postdoc Position
Artificial neural networks applied to psychology, neuroscience, and climate change
Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro
08.12.2025 23:53
If you calculated noise ceilings (NC) based on split-half reliability, e.g. to compare models, this one is important!
It seems many published studies miscalculated it, overestimating model performance. First, let's make this crystal clear:
NC = 2r / (1 + r)
where r is the split-half correlation.
08.12.2025 00:51
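A minimal numeric sketch of the formula above (my own illustration, not code from the thread), assuming NumPy; the toy data are hypothetical:

```python
import numpy as np

def noise_ceiling(half1, half2):
    """Split-half correlation r corrected with Spearman-Brown: NC = 2r / (1 + r)."""
    r = np.corrcoef(half1, half2)[0, 1]
    return 2 * r / (1 + r)

# Toy data: a shared signal plus independent measurement noise in each half.
rng = np.random.default_rng(0)
signal = rng.normal(size=1000)
half1 = signal + rng.normal(size=1000)
half2 = signal + rng.normal(size=1000)

nc = noise_ceiling(half1, half2)
# With equal signal and noise variance, the split-half r is around 0.5,
# so NC comes out around 2 * 0.5 / 1.5 ≈ 0.67.
```

The correction compensates for the fact that each half contains only part of the data: the ceiling applies to the full (averaged) dataset.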
We recently stumbled upon a surprisingly common misunderstanding in computing noise ceilings that can be quite consequential. So if you care about noise ceilings, please check out Sander's thread and our preprint!
05.12.2025 08:39
Thanks for sharing! I enjoyed this exchange. My apologies for citing your book in a suboptimal spot; that one's on us.
04.12.2025 18:58
6/ We'd like to thank all researchers who answered our questions on how noise ceilings were computed, and we thank several folks listed in the Acknowledgments for their discussions and efforts.
PDF:
osf.io/preprints/ps...
04.12.2025 18:53
5/ We also share some simulations and proofs to illustrate the core point.
Overall, our aim is to make the computation of noise ceilings more consistent across future work, and we share several tips for achieving this.
04.12.2025 18:53
4/ To this end, we offer a basic intuition with math & visuals to explain how reliability maps onto model performance. In a nutshell: both split halves contain measurement noise, but a true model does not; so the split-half correlation is doubly attenuated, causing the ceiling to be underestimated.
04.12.2025 18:53
3/ We analyzed the literature and found that about 60% of sampled studies use a mapping that makes models appear closer to ceiling than intended. The goal of this paper is to show the statistical underpinnings of why the above mappings follow.
04.12.2025 18:53
2/ The pitfall: although this reliability-based ceiling is expressed as a correlation coefficient, it is not a ceiling on model correlation (r) but on model explained variance (R²).
Specifically:
Model metric R² maps onto reliability.
Model metric r maps onto sqrt(reliability).
04.12.2025 18:53
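The mapping can be checked with a small simulation (my own sketch, assuming NumPy; here `signal` plays the role of a noiseless "true" model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
signal = rng.normal(size=n)           # noiseless "true" model
half1 = signal + rng.normal(size=n)   # two noisy measurements of it
half2 = signal + rng.normal(size=n)

r_split = np.corrcoef(half1, half2)[0, 1]
reliability = 2 * r_split / (1 + r_split)   # Spearman-Brown corrected

data = (half1 + half2) / 2                  # the averaged dataset
r_true = np.corrcoef(signal, data)[0, 1]    # true model's correlation with data

# r_true**2 closely matches reliability, so the ceiling on model r is
# sqrt(reliability), not reliability itself.
```

In other words, even a perfect model correlates with the noisy data at sqrt(reliability); comparing a model's r directly against reliability makes it look closer to ceiling than it is.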
1/ Noise ceilings are great because they index how much variance a model can in principle explain given noise in the data. A popular way to estimate them is by splitting the data in half, correlating the two halves, and applying the Spearman-Brown correction.
04.12.2025 18:53
New preprint w/ Malin Styrnal & @martinhebart.bsky.social
Have you ever computed noise ceilings to understand how well a model performs? We wrote a clarifying note on a subtle and common misapplication that can make models appear quite a lot better than they are.
osf.io/preprints/ps...
04.12.2025 18:53
Finally got the job ad. Looking for 2 PhD students to start spring next year:
www.gao-unit.com/join-us/
If comp neuro, ML, and AI4Neuro is your thing, or you just nerd out over brain recordings, apply!
I'm at NeurIPS. DM me here / on the conference app, or email if you want to meet.
03.12.2025 09:36
Thanks for sharing these. I think causality is what lies at the core of that TICS paper. I guess separating control & mechanistic explanation is more a statement about the use of our causal models; maybe both boil down to mapping causes and effects?
03.12.2025 08:51
Reply to "Top-down and bottom-up neuroscience as collections of practices" - Nature Reviews Neuroscience
@loopyluppi.bsky.social & @frosas.bsky.social have written a reply. I recommend reading this as it clarifies their stance and advances the discussion:
www.nature.com/articles/s41...
I think this was a fruitful exchange. It was also a great experience to write this up w/ David in Amsterdam @ CCN2025
02.12.2025 15:33
Top-down and bottom-up neuroscience as collections of practices
Despite these points, we find the precision/accuracy distinction a useful one.
Finally, our piece considers what targets might be the end-point of a precision/accuracy-first approach. We distinguish mechanistic explanation, prediction, and control.
PDF: rdcu.be/eSKYI
02.12.2025 15:22
Third, we question the normative assumptions. Bottom-up is said to emphasize solid foundations & experimental control. We argue that these virtues are also embodied by top-down work, just in a different form. And if this were not so, that would be a reason to regard the two approaches as not equally valid.
02.12.2025 15:13
Second, the distinction doesn't always cut cleanly. Is Kandel's work on Aplysia really precision-first, as suggested? From another angle, it looks accuracy-first: it operationalizes memory via an ambitious linking hypothesis.
02.12.2025 15:13
Our first point: this distinction collides with other accounts in the literature. We catalogue some of the diverse meanings and practices associated with "bottom-up" and "top-down" neuroscience.
02.12.2025 15:13
The critiqued paper outlines two research cultures:
A bottom-up, precision-first approach that emphasizes control and iterative steps, and a top-down, accuracy-first approach that values coarse-grained analysis and tackling big questions head-on.
www.nature.com/articles/s41...
02.12.2025 15:13
Top-down and bottom-up neuroscience as collections of practices - Nature Reviews Neuroscience
New Correspondence with @davidpoeppel.bsky.social in Nat Rev Neurosci. www.nature.com/articles/s41...
Here, we critique a recent paper by Rosas et al. We argue that "Bottom-up" and "Top-down" neuroscience have various meanings in the literature.
PDF: rdcu.be/eSKYI
02.12.2025 15:13