
Martin Hebart

@martinhebart.bsky.social

Proud dad, Professor of Computational Cognitive Neuroscience, author of The Decoding Toolbox, founder of http://things-initiative.org. Our lab πŸ‘‰ https://hebartlab.com

4,523 Followers  |  542 Following  |  293 Posts  |  Joined: 04.09.2023

Latest posts by martinhebart.bsky.social on Bluesky

four brain maps showing noise ceiling estimates in response to image presentation


New CNeuroMod-THINGS open-access fMRI dataset: 4 participants Β· ~4,000 images (720 categories), each shown 3Γ— (12k trials per subject) Β· individual functional localizers & NSD-inspired QC. Preprint: arxiv.org/abs/2507.09024 Congrats Marie St-Laurent and @martinhebart.bsky.social!!

30.07.2025 01:57 β€” πŸ‘ 34    πŸ” 16    πŸ’¬ 1    πŸ“Œ 0

@tnm-lab.bsky.social Excited about causality perception, expectations & fMRI?
Apply for the PhD position (3 years, 75%) in Marburg (www.theadaptivemind.de) by 17.08.:
stellenangebote.uni-marburg.de/jobposting/1...
#Job #Promotion #fMRI #Psychologie

18.07.2025 05:51 β€” πŸ‘ 8    πŸ” 8    πŸ’¬ 0    πŸ“Œ 0
Human retinotopic mapping: From empirical to computational models of retinotopy | JOV | ARVO Journals

🧠✨ Excited to share that our literature review on retinotopic mapping in the human visual cortex is now published!
tinyurl.com/5d9ne68b

Amazing collab with Noah Benson and Alex Puckett! We hope this will be a helpful resource for pRF modellers and visual neuroscientists!

17.07.2025 04:56 β€” πŸ‘ 22    πŸ” 9    πŸ’¬ 1    πŸ“Œ 0
Sensory responses of visual cortical neurons are not prediction errors
Predictive coding is theorized to be a ubiquitous cortical process to explain sensory responses. It asserts that the brain continuously predicts sensory information and imposes those predictions on lo...

1/3) This may be a very important paper: it suggests that there are no prediction-error-encoding neurons in sensory areas of cortex:

www.biorxiv.org/content/10.1...

I personally am a big fan of the idea that cortical regions (allo- and neocortex) are doing sequence prediction.

But...

πŸ§ πŸ“ˆ πŸ§ͺ

11.07.2025 15:45 β€” πŸ‘ 210    πŸ” 76    πŸ’¬ 10    πŸ“Œ 4
Function over form: The temporal evolution of affordance-based scene categorization | JOV | ARVO Journals

🚨 New paper in Journal of Vision!
We show that scene affordances (what you can do in a space) shape how we perceive and categorize scenes. They influence your similarity preferences, predict your categorization false alarms, and even alter neural representations. πŸ§΅πŸ‘‡
πŸ”— doi.org/10.1167/jov....
(1/7)

08.07.2025 16:38 β€” πŸ‘ 26    πŸ” 9    πŸ’¬ 3    πŸ“Œ 0

🚨 Preprint alert! Excited to share my second PhD project: β€œAdopting a human developmental visual diet yields robust, shape-based AI vision” -- a nice case showing that biology, neuroscience, and psychology can still help AI :)! arxiv.org/abs/2507.03168

08.07.2025 13:09 β€” πŸ‘ 11    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

2 more days to apply! Updated info here: drive.google.com/file/d/1WGEE...

08.07.2025 07:30 β€” πŸ‘ 4    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0
A meta-analytic review of child maltreatment and interoception
Nature Mental Health - Interoception, the perception of internal bodily signals, is crucial for mental and physical well-being, yet the origins of disruptions in interoception are not well...

My first dissertation paper is out in Nature Mental Health! 🀩

With colleagues from @tudresden.bsky.social and @freieuniversitaet.bsky.social, we conducted a meta-analysis on the link between childhood trauma and interoception β€” our ability to sense internal bodily signals.

rdcu.be/eu8bo

1/🧡

07.07.2025 14:25 β€” πŸ‘ 130    πŸ” 46    πŸ’¬ 2    πŸ“Œ 4
Fast and robust visual object recognition in young children
The visual recognition abilities of preschool children rival those of state-of-the-art artificial intelligence models.

My paper with @stellalourenco.bsky.social ‬is now out in Science Advances!

We found that children have robust object recognition abilities that surpass those of many ANNs. Models only outperformed kids when their training far exceeded what a child could experience in their lifetime.

doi.org/10.1126/scia...

02.07.2025 19:38 β€” πŸ‘ 92    πŸ” 28    πŸ’¬ 2    πŸ“Œ 2

Go work with Rosanne!

01.07.2025 06:55 β€” πŸ‘ 3    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

We’re currently looking into it! But since even CLIP didn’t perform that much better, I guess it will take models with a stronger semantic bias to close the gap.

27.06.2025 12:52 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

🚨 Our new paper is out in @natmachintell.nature.com!
πŸ€” Deep neural nets (DNNs) often align with humans in performance and even representations β€” but do they "think" like us?
πŸ“„ β€œDimensions underlying the representational alignment of deep neural networks with humans”

25.06.2025 12:31 β€” πŸ‘ 7    πŸ” 3    πŸ’¬ 2    πŸ“Œ 0

Big year for our lab at #OHBM2025!

Thrilled to present an exciting mix of posters, talks, and lots of brainy fun πŸ§ πŸ€“

Come check us out! We’d love to connect!

@cognizelab.bsky.social
@ohbmofficial.bsky.social
#OHBM #OHBM2025 #Neuroimaging

25.06.2025 00:04 β€” πŸ‘ 11    πŸ” 5    πŸ’¬ 1    πŸ“Œ 1

@neocosmliang.bsky.social demonstrating a domain-general neural signature that predicts perceived beauty of objects 🌹πŸ₯€, in collab with the amazing @martinhebart.bsky.social and @dkaiserlab.bsky.social #THINGS

25.06.2025 00:04 β€” πŸ‘ 11    πŸ” 8    πŸ’¬ 0    πŸ“Œ 1

Overall, we see this approach as a proof-of-principle for a framework that allows us to better compare human and AI representations. The limitations in alignment we revealed (e.g. the visual bias) open the door to new ways of evaluating or even improving their correspondence /fin

23.06.2025 20:02 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This means that DNNs in fact only appear to approximate the semantic nature of the dimensions found in humans; the interpretation remains limited and doesn't reflect the full richness and specificity of human representations. 14/n

23.06.2025 20:02 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

However, when looking at the differences, there were clear examples where the DNN dimension yielded low values when the human dimension was clearly high (e.g., animals in an β€œanimal-related” dimension), or where the DNN dimension was high when it should have been low (e.g., a shopping cart) 13/n

23.06.2025 20:02 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

However, what was still missing was a direct, face-to-face comparison with humans. For that, we used the most human-AI-aligned pairs of dimensions. At first glance, it again seemed as if they reflected similar information 12/n

23.06.2025 20:02 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We also maximized the activity of individual dimensions, and we manipulated individual dimensions in images and observed selective changes in those dimensions, all yielding consistent results. Overall, this should demonstrate that the dimensions are indeed highly interpretable, right? 11/n

23.06.2025 20:02 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
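A compact way to picture "maximizing the activity of individual dimensions" is plain activation maximization. The sketch below is a minimal, assumed setup (a single VGG-16 output unit and random weights stand in for a real dimension and a trained network): gradient-ascend an input image until the chosen unit responds strongly.

```python
import torch
from torchvision.models import vgg16

# Activation-maximization sketch (illustrative assumptions throughout):
# optimize the pixels of a random image to drive one unit's activation up.
model = vgg16(weights=None).eval()        # random weights keep the demo offline
for p in model.parameters():
    p.requires_grad_(False)               # only the image gets optimized

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(50):
    opt.zero_grad()
    act = model(img)[0, 42]               # hypothetical "dimension" unit
    (-act).backward()                     # ascend by descending the negative
    opt.step()
```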

We first created heatmaps of relevant image regions for individual image dimensions using Grad-CAM. The results made sense: for example, an alleged technology dimension highlighted the button of a flashlight as informative about technology. 10/n

23.06.2025 20:02 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
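For the curious, the Grad-CAM idea is compact enough to sketch: back-propagate a scalar of interest to the last convolutional layer, pool the gradients per feature map, and use them to weight the activations. In this minimal sketch a single output unit stands in for a dimension score and the weights are random so the demo runs offline; both are assumptions, not the paper's setup.

```python
import torch
from torchvision.models import vgg16

# Minimal Grad-CAM sketch: weight the last conv layer's feature maps by
# their pooled gradients with respect to a scalar of interest.
model = vgg16(weights=None).eval()
img = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image

fmap = model.features[:29](img)          # activations of the last conv layer
fmap.retain_grad()                       # keep this non-leaf tensor's gradient
x = torch.relu(fmap)                     # non-inplace, so fmap stays intact
x = model.features[30](x)                # final max-pool
x = model.avgpool(x).flatten(1)
out = model.classifier(x)
out[0, 42].backward()                    # hypothetical unit as the "dimension"

w = fmap.grad.mean(dim=(2, 3), keepdim=True)      # pooled gradients
cam = torch.relu((w * fmap.detach()).sum(dim=1))  # weighted feature maps
print(cam.shape)                         # coarse heatmap, here (1, 14, 14)
```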

OK, but despite these differences, it still appeared as if the DNN at least carried many meaningfully interpretable dimensions. To put this to the test, we ran multiple experiments on the DNN dimensions to probe their interpretability. 9/n

23.06.2025 20:02 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

What this means is that, even at the highest level of representation, DNNs rely on very different information than humans to solve similar tasks. 8/n

23.06.2025 20:02 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This visual bias wasn’t specific to VGG-16. We repeated the same dimension identification approach for other common network architectures and found a similar bias across all tested networks. 7/n

23.06.2025 20:02 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 3    πŸ“Œ 0

However, this comparison already revealed an important difference. When we quantified the proportion of visual, semantic, and mixed dimensions, it turned out that human representations were much more semantic, while the DNN was dominated by visual dimensions. 6/n

23.06.2025 20:02 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Using the penultimate layer of the common VGG-16 architecture, we found 70 representational dimensions. At first glance, they appeared to be largely interpretable, reflecting visual and semantic object dimensions. Some of them even appeared to encode basic shape features. 5/n

23.06.2025 20:02 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
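For orientation, extracting the penultimate-layer activations such an analysis starts from is a short exercise in PyTorch. A minimal sketch (weights=None keeps it offline; in practice you would load pretrained weights) that grabs the 4096-dimensional features from torchvision's VGG-16:

```python
import torch
from torchvision.models import vgg16

# Sketch: extract penultimate-layer activations from VGG-16, i.e. the
# representation in which the dimensions were identified.
model = vgg16(weights=None).eval()
penultimate = torch.nn.Sequential(
    model.features,
    model.avgpool,
    torch.nn.Flatten(1),
    *model.classifier[:-1],   # drop the final 1000-way classification layer
)
with torch.no_grad():
    feats = penultimate(torch.randn(2, 3, 224, 224))
print(feats.shape)            # (2, 4096): one feature vector per image
```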

Just as for humans, we can then infer the core representational dimensions from these similarity β€œjudgments”. This now enables us to directly compare representational dimensions in humans with those found in deep nets! So, what do we find? 4/n

23.06.2025 20:02 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
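A rough sketch of what inferring dimensions from such "judgments" can look like, loosely in the spirit of SPoSE-style embedding models: fit sparse, non-negative item embeddings so that their dot products predict which pair was kept together in each triplet. All sizes, weights, and the toy choice data below are placeholders, not the paper's settings.

```python
import torch

# Learn sparse, non-negative embeddings from odd-one-out choices.
n_items, n_dims = 50, 10
X = torch.nn.Parameter(torch.rand(n_items, n_dims))
opt = torch.optim.Adam([X], lr=0.01)

def loss(triplets):
    i, j, k = triplets.T             # convention: (i, j) was the chosen pair
    s_ij = (X[i] * X[j]).sum(-1)
    s_ik = (X[i] * X[k]).sum(-1)
    s_jk = (X[j] * X[k]).sum(-1)
    logits = torch.stack([s_ij, s_ik, s_jk], dim=-1)
    nll = -torch.log_softmax(logits, dim=-1)[:, 0].mean()
    return nll + 0.01 * X.abs().mean()   # L1 term encourages sparse dimensions

triplets = torch.randint(0, n_items, (256, 3))   # toy choice data
for _ in range(200):
    opt.zero_grad()
    loss(triplets).backward()
    opt.step()
    with torch.no_grad():
        X.clamp_(min=0)              # keep the embedding non-negative
```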

To address this, we adapted a recent approach in cognitive science for identifying core representational dimensions underlying human similarity judgments in an odd-one-out task. The trick: We treat a deep neural network representation as a human playing the odd-one-out game. 3/n

23.06.2025 20:02 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
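The "trick" from the post above fits in a few lines: given embeddings for three images, the pair with the highest similarity stays together, and the remaining image is the network's odd-one-out choice. Dot-product similarity and the toy data here are assumptions for illustration.

```python
import numpy as np

def odd_one_out(emb, i, j, k):
    """Simulate a network 'playing' the odd-one-out game on a triplet."""
    sims = {
        k: emb[i] @ emb[j],   # if (i, j) is the closest pair, k is odd
        j: emb[i] @ emb[k],
        i: emb[j] @ emb[k],
    }
    return max(sims, key=sims.get)

rng = np.random.default_rng(0)
emb = rng.random((5, 4))      # 5 toy "images", 4-dimensional embeddings
print(odd_one_out(emb, 0, 1, 2))
```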

How can we compare human and AI representations? A popular approach is studying representational similarities. But this method only informs us about the *degree* of alignment. Without clear hypotheses, we do not know *what it actually is* that makes them similar or different. 2/n

23.06.2025 20:02 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
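To make that contrast concrete, here is a minimal sketch of the standard similarity comparison the post refers to: build a representational dissimilarity matrix (RDM) per system, correlate the two, and get a single alignment score. Data and sizes are toy placeholders, not anything from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
human = rng.random((20, 8))    # toy human embedding of 20 objects
dnn = rng.random((20, 64))     # toy DNN features for the same objects

# Condensed RDMs (pairwise correlation distances) and their rank correlation.
rdm_human = pdist(human, metric="correlation")
rdm_dnn = pdist(dnn, metric="correlation")
rho, _ = spearmanr(rdm_human, rdm_dnn)
print(f"alignment: rho = {rho:.2f}")  # one number; silent on *what* differs
```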
Dimensions underlying the representational alignment of deep neural networks with humans - Nature Machine Intelligence
An interpretability framework that compares how humans and deep neural networks process images has been presented. Their findings reveal that, unlike humans, deep neural networks focus more on visual ...

What makes humans similar or different to AI? In a paper out in @natmachintell.nature.com led by @florianmahner.bsky.social & @lukasmut.bsky.social, w/ Umut GΓΌclΓΌ, we took a deep look at the factors underlying their representational alignment, with surprising results.

www.nature.com/articles/s42...

23.06.2025 20:02 β€” πŸ‘ 90    πŸ” 32    πŸ’¬ 2    πŸ“Œ 1

A more personal note: my sister Alexandra is a professional singer in the Northern German Radio Choir. She has a great solo tonight at 8 pm CET in the Elbphilharmonie concert hall in Hamburg. If you like Italian baroque, tune in online:

www.ndr.de/orchester_ch...

20.06.2025 13:24 β€” πŸ‘ 13    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
