@martinhebart.bsky.social
Proud dad, Professor of Computational Cognitive Neuroscience, author of The Decoding Toolbox, founder of http://things-initiative.org. Our lab: https://hebartlab.com
Amazing news, Alex! Huge congrats, and very well deserved!
09.12.2025 21:50
@cimcyc.bsky.social is hiring!
SIX postdoc positions are coming up to dive into collaborative projects bridging areas of psychological science.
Amazing opportunity to boost a postdoc career in a cutting-edge research center with an outstanding team!
👇
cimcyc.ugr.es/en/informati...
Very thoughtful thread on why it matters to compute the right noise ceiling & why communication is so important to prevent this issue from spreading. Kudos to Sam for being so transparent!
In brief:
NC for best R^2 == data reliability expressed as r
NC for best r == sqrt(reliability)
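For illustration, here is a minimal simulation (my own sketch, not from the preprint) of the relationship above: with two measurement repeats, the correlation between repeats gives the reliability r, and even a model that predicts the true signal perfectly tops out at R^2 ≈ r but at a correlation of ≈ sqrt(r).

```python
# Minimal sketch (not from the preprint): the ceiling on R^2 equals the
# reliability r, while the ceiling on the correlation equals sqrt(r).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                       # number of stimuli / conditions
signal = rng.standard_normal(n)   # true, noise-free responses
noise_sd = 1.0                    # same noise level in both repeats

# Two independent measurement repeats of the same underlying signal
y1 = signal + noise_sd * rng.standard_normal(n)
y2 = signal + noise_sd * rng.standard_normal(n)

reliability = np.corrcoef(y1, y2)[0, 1]    # test-retest reliability r (~0.5 here)

# A "perfect model" predicts the true signal exactly
best_r = np.corrcoef(signal, y1)[0, 1]     # best achievable correlation
best_r2 = best_r ** 2                      # best achievable R^2

print(f"reliability r     = {reliability:.3f}")
print(f"best possible r   = {best_r:.3f}  (sqrt(r) = {np.sqrt(reliability):.3f})")
print(f"best possible R^2 = {best_r2:.3f}  (r itself = {reliability:.3f})")
```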
We recently stumbled upon a surprisingly common misunderstanding in computing noise ceilings that can be quite consequential. So if you care about noise ceilings, please check out Sander's thread and our preprint!
05.12.2025 08:39
New preprint w/ Malin Styrnal & @martinhebart.bsky.social
Have you ever computed noise ceilings to understand how well a model performs? We wrote a clarifying note on a subtle and common misapplication that can make models appear quite a lot better than they are.
osf.io/preprints/ps...
Super happy to announce that our Research Training Group "PIMON" is funded by the @dfg.de! Starting in October, we will have exciting opportunities for PhD students who want to explore object and material perception & interaction in Gießen @jlugiessen.bsky.social! Just look at this amazing team!
03.12.2025 12:46
New Correspondence with @davidpoeppel.bsky.social in Nat Rev Neurosci. www.nature.com/articles/s41...
Here, we critique a recent paper by Rosas et al. We argue that "Bottom-up" and "Top-down" neuroscience have various meanings in the literature.
PDF: rdcu.be/eSKYI
Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy
01.12.2025 11:26
(3) The community can now start to apply Fernanda's tool retrospectively to countless existing anatomical scans to investigate how individual differences in retinotopic organization relate to measures of individual differences in function.
Really curious to hear how the community receives this!
(2) From a more practical point of view, depending on the goal of a study and the required fidelity, we can now confidently say that we may no longer need to collect retinotopic mapping data, freeing up scan time for other tasks. 3/4
01.12.2025 13:51
(1) If we can accurately predict individual-specific function from structure alone, this highlights that brain structure can act as a strong constraint on brain function in normally-developing individuals. To me, this offers a new paradigm for studying how structure and function are related. 2/4
01.12.2025 13:51
Really excited to see this preprint out! Fernanda did an amazing job demonstrating how you can accurately predict retinotopy from T1w scans alone. This is important for several reasons: 1/4
01.12.2025 13:51
We'd love your feedback on BERG (github.com/gifale95/BERG): pretrained encoding models + a Python toolkit for generating in silico neural responses for in silico experimentation. Your input will make BERG more useful and reliable!
forms.gle/pybrqcaqdso2...
#NeuroAI #CompNeuro #neuroscience #AI
It's not too late to apply for the PhD position in my lab! Please send your documents (cover letter, CV, transcripts, names of references) through the official application platform by Nov 25!
24.11.2025 08:55
Huge congrats to Philipp Kaniuth for successfully defending his PhD summa cum laude (with distinction) "on the measurement of representations and similarity"! Phil was my first PhD candidate, so it's a particularly special event for me, and he can be very proud of his achievements!
20.11.2025 18:16
I don't see the inverted illusion. Is the line crossing the others supposed to be perceived as being closer?
12.11.2025 08:13
Maybe to avoid confusion, another P.S.: of course noise ceilings *can* indicate data quality, and when they are high they usually *do* indicate high quality. But you have to look at the whole package and take all factors into account to make that judgment, and it's hard to compare across datasets.
08.11.2025 10:23
I hope this thread was interesting or useful! I'd also like to highlight a great paper related to example 3 by Kendrick Kay:
journals.plos.org/plosone/arti...
P.S.: the same issue extends to judging absolute variance explained by your model.
P.P.S.: no AI was involved in making this thread.
As these examples show, noise ceilings can at best give you a rough idea of data quality. It is nontrivial to compare datasets, or even pipelines on the same dataset, to judge which one is better.
Noise ceilings have one purpose: To tell you how well your model can possibly do on this data.
Example 3: Assume 2 datasets:
Dataset A.
Dataset B, which is dataset A after denoising.
Our denoising algorithm is great: it isolates a signal component and throws out all the noise. But it also removes all other signal!
Dataset B is now mostly pure signal & has extraordinary noise ceilings. Which one is better?
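A toy version of example 3 (my own assumptions, not code from the thread): a "denoiser" that keeps only one signal component produces beautifully reliable data and a sky-high noise ceiling, yet a model of the discarded, perfectly real signal component is now doomed.

```python
# Toy version of example 3 (illustrative assumptions only): a "denoiser" that
# keeps one signal component and discards everything else inflates split-half
# reliability while destroying other, perfectly real signal.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

sig_kept = rng.standard_normal(n)   # the component the denoiser isolates
sig_lost = rng.standard_normal(n)   # other real signal, thrown away

def noise(scale=1.0):
    return scale * rng.standard_normal(n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Dataset A: raw data, two measurement repeats
a1 = sig_kept + sig_lost + noise()
a2 = sig_kept + sig_lost + noise()

# Dataset B: "denoised" data = only the retained component plus a little residual noise
b1 = sig_kept + noise(0.1)
b2 = sig_kept + noise(0.1)

print("split-half reliability A:", round(corr(a1, a2), 3))   # moderate (~0.67)
print("split-half reliability B:", round(corr(b1, b2), 3))   # near 1.0

# A model that (correctly) predicts the *other* signal component:
print("model of sig_lost vs A:", round(corr(sig_lost, a1), 3))  # clearly above 0 (~0.58)
print("model of sig_lost vs B:", round(corr(sig_lost, b1), 3))  # ~0: that signal is gone
```

Dataset B "wins" on the noise ceiling while carrying strictly less signal.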
Now it becomes harder to move the goalposts, because there are so many things that can change between two datasets!
But you might now argue that at least for two datasets with the same parameters and the same number of trials, we can take noise ceilings as an index of relative data quality?
Example 2: We have 2 almost identical fMRI datasets but:
Dataset A has 2mm^3 resolution.
Dataset B has 4mm^3 resolution.
Dataset B has much higher noise ceilings. Which one is better?
Dataset A has lower SNR per voxel. But that's intentional. Downsampling would probably yield a similar benefit for dataset A?
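A rough numerical sketch of example 2, under simplified assumptions of my own (a 1-D "cortex", spatially smooth signal, noise independent across fine voxels): pooling small voxels into larger ones averages noise away, so the coarser dataset shows higher per-voxel split-half reliability even though nothing about the measurement improved.

```python
# Rough sketch of example 2 (simplified 1-D "cortex", illustrative assumptions):
# pooling fine voxels into coarser ones averages out independent noise, so the
# per-voxel split-half reliability (and hence the noise ceiling) goes up.
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_vox = 200, 400

# Spatially smooth signal: each pair of neighbouring fine voxels shares its tuning
base = rng.standard_normal((n_stim, n_vox // 2))
signal = np.repeat(base, 2, axis=1)

def measure():
    # independent measurement noise per voxel and stimulus
    return signal + 1.5 * rng.standard_normal((n_stim, n_vox))

rep1, rep2 = measure(), measure()

def mean_voxel_reliability(a, b):
    # Pearson correlation across stimuli, computed voxel by voxel, then averaged
    az = (a - a.mean(0)) / a.std(0)
    bz = (b - b.mean(0)) / b.std(0)
    return float((az * bz).mean(0).mean())

def pool_pairs(x):
    # stand-in for lower spatial resolution: average neighbouring voxel pairs
    return x.reshape(n_stim, n_vox // 2, 2).mean(axis=2)

print(f"mean reliability, fine voxels ('2 mm'):   {mean_voxel_reliability(rep1, rep2):.3f}")
print(f"mean reliability, pooled voxels ('4 mm'): {mean_voxel_reliability(pool_pairs(rep1), pool_pairs(rep2)):.3f}")
```

The pooled dataset has the higher ceiling simply because each big voxel averages over more measurements, not because it is a better dataset.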
Now, you may argue we are comparing apples and oranges, and we can just use 2 repeats in dataset A to compare them.
But (1) now you've agreed that the noise ceiling is not an absolute index of quality, and (2) for your goals, a dataset with 5,000 unique images might actually be better than one with 100?
Example 1: Assume we have two fMRI datasets:
Dataset A: 12 sessions, with 100 images each shown 100x.
Dataset B: 12 sessions, with 5,000 images each shown 2x.
Dataset A obviously has almost perfect noise ceilings, while dataset B's ceilings are much lower. Is the data quality of dataset A higher?
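For concreteness, here is example 1 worked through with the Spearman-Brown prediction formula (a standard psychometric result; the single-trial reliability value is an assumption of mine, and this is not necessarily the exact computation behind the thread): the per-trial data quality is identical in both datasets, and the number of repeats alone drives the ceiling.

```python
# Example 1 in numbers, using the Spearman-Brown prediction formula
# (standard psychometrics; illustrative assumptions, not the thread's code).
def averaged_reliability(r_single_trial: float, n_repeats: int) -> float:
    """Reliability of the average over n_repeats trials (Spearman-Brown)."""
    return n_repeats * r_single_trial / (1 + (n_repeats - 1) * r_single_trial)

r1 = 0.15  # assumed single-trial reliability, identical for both datasets

ceiling_A = averaged_reliability(r1, n_repeats=100)  # dataset A: 100 repeats per image
ceiling_B = averaged_reliability(r1, n_repeats=2)    # dataset B: 2 repeats per image

print(f"reliability of the averaged data, 100 repeats: {ceiling_A:.2f}")  # ~0.95
print(f"reliability of the averaged data,   2 repeats: {ceiling_B:.2f}")  # ~0.26
```

Same scanner, same per-trial data quality, wildly different noise ceilings.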
You might say: Wait, but the term noise ceiling implies that it tells you something about the signal-to-noise ratio in the data? So this means less noise = better data quality?
In the following, I'll use three examples to highlight why it isn't that simple.
Noise ceilings are really useful: You can estimate the reliability of your data and get an index of how well your model can possibly perform given the noise in the data.
But, contrary to what you may think, noise ceilings do not provide an absolute index of data quality.
Let's dive into why. 🧵
View from your office onto Giessen and surrounding villages.
Please repost! I am looking for a PhD candidate in the area of Computational Cognitive Neuroscience to start in early 2026.
The position is funded as part of the Excellence Cluster "The Adaptive Mind" at @jlugiessen.bsky.social.
Please apply here by Nov 25:
www.uni-giessen.de/de/ueber-uns...
*Neurocomputational architecture for syntax/learning*
Neuroscience & Philo Salon: join our discussion with @elliot-murphy.bsky.social with commentaries by @wmatchin.bsky.social and @sandervanbree.bsky.social
Nov 5, 10:30 am eastern US
Register:
umd.zoom.us/my/luizpesso...
#neuroskyence
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
Large-scale similarity ratings of 768 short action videos uncover 28 interpretable dimensions, such as interaction, sport, and craft, offering a framework to quantify and compare human actions.
@martinhebart.bsky.social
www.nature.com/articles/s44...