four brain maps showing noise ceiling estimates in response to image presentation
New CNeuroMod-THINGS open-access fMRI dataset: 4 participants · ~4,000 images (720 categories), each shown 3× (12k trials per subject) · individual functional localizers & NSD-inspired QC. Preprint: arxiv.org/abs/2507.09024. Congrats Marie St-Laurent and @martinhebart.bsky.social!!
30.07.2025 01:57
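The noise-ceiling maps mentioned above are typically derived from the repeated presentations (here, 3 per image). A minimal numpy sketch of one common NSD-style estimate, using simulated data with made-up variances (an illustration, not the dataset's actual pipeline):

```python
import numpy as np

# Simulated responses: 100 voxels x 50 images x 3 repetitions
# (each image is shown 3 times, as in the dataset).
rng = np.random.default_rng(0)
signal = rng.standard_normal((100, 50, 1))          # stable image-evoked response
data = signal + 0.5 * rng.standard_normal((100, 50, 3))  # plus trial-to-trial noise

n_rep = data.shape[2]
noise_var = data.var(axis=2, ddof=1).mean(axis=1)   # within-image (trial-to-trial) variance
total_var = data.mean(axis=2).var(axis=1, ddof=1)   # variance of the trial-averaged response
signal_var = np.clip(total_var - noise_var / n_rep, 0, None)  # remove residual noise
# Noise ceiling: fraction of variance in the averaged response that is explainable signal
noise_ceiling = signal_var / (signal_var + noise_var / n_rep)
```

Averaging over repetitions shrinks the noise variance by the number of repetitions, which is why the ceiling rises with more presentations per image.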
@tnm-lab.bsky.social Do causality perception, expectations & fMRI excite you?
Apply for the PhD position (3 years, 75%) in Marburg (www.theadaptivemind.de) by 17.08.:
stellenangebote.uni-marburg.de/jobposting/1...
#Job #Promotion #fMRI #Psychologie
18.07.2025 05:51
Human retinotopic mapping: From empirical to computational models of retinotopy | JOV | ARVO Journals
Excited to share that our literature review on retinotopic mapping in the human visual cortex is now published!
tinyurl.com/5d9ne68b
Amazing collab with Noah Benson and Alex Puckett! We hope this will be a helpful resource for pRF modellers and visual neuroscientists!
17.07.2025 04:56
Sensory responses of visual cortical neurons are not prediction errors
Predictive coding is theorized to be a ubiquitous cortical process to explain sensory responses. It asserts that the brain continuously predicts sensory information and imposes those predictions on lo...
1/3) This may be a very important paper: it suggests that there are no prediction-error-encoding neurons in sensory areas of cortex:
www.biorxiv.org/content/10.1...
I personally am a big fan of the idea that cortical regions (allocortex and neocortex) are doing sequence prediction.
But...
11.07.2025 15:45
Function over form: The temporal evolution of affordance-based scene categorization | JOV | ARVO Journals
New paper in Journal of Vision!
We show that scene affordances (what you can do in a space) shape how we perceive and categorize scenes: they shape similarity judgments, predict categorization false alarms, and even alter neural representations. 🧵
doi.org/10.1167/jov....
(1/7)
08.07.2025 16:38
Preprint alert! Excited to share my second PhD project: "Adopting a human developmental visual diet yields robust, shape-based AI vision", a nice case showing that biology, neuroscience, and psychology can still help AI! arxiv.org/abs/2507.03168
08.07.2025 13:09
2 more days to apply! Updated info here: drive.google.com/file/d/1WGEE...
08.07.2025 07:30
A meta-analytic review of child maltreatment and interoception
Nature Mental Health - Interoception, the perception of internal bodily signals, is crucial for mental and physical well-being, yet the origins of disruptions in interoception are not well...
My first dissertation paper is out in Nature Mental Health!
With colleagues from @tudresden.bsky.social and @freieuniversitaet.bsky.social, we conducted a meta-analysis on the link between childhood trauma and interoception, our ability to sense internal bodily signals.
rdcu.be/eu8bo
1/🧵
07.07.2025 14:25
Fast and robust visual object recognition in young children
The visual recognition abilities of preschool children rival those of state-of-the-art artificial intelligence models.
My paper with @stellalourenco.bsky.social is now out in Science Advances!
We found that children have robust object recognition abilities that surpass many ANNs. Models only outperformed kids when their training far exceeded what a child could experience in their lifetime.
doi.org/10.1126/scia...
02.07.2025 19:38
Go work with Rosanne!
01.07.2025 06:55
We're currently looking into it! But since even CLIP didn't perform that much better, I guess it will take models with a stronger semantic bias to close the gap.
27.06.2025 12:52
Our new paper is out in @natmachintell.nature.com!
Deep neural nets (DNNs) often align with humans in performance and even representations, but do they "think" like us?
"Dimensions underlying the representational alignment of deep neural networks with humans"
25.06.2025 12:31
Big year for our lab at #OHBM2025!
Thrilled to present an exciting mix of posters, talks, and lots of brainy fun!
Come check us out! We'd love to connect!
@cognizelab.bsky.social
@ohbmofficial.bsky.social
#OHBM #OHBM2025 #Neuroimaging
25.06.2025 00:04
@neocosmliang.bsky.social demonstrating a domain-general neural signature that predicts the perceived beauty of objects, in collab with the amazing @martinhebart.bsky.social and @dkaiserlab.bsky.social #THINGS
25.06.2025 00:04
Overall, we see this approach as a proof-of-principle for a framework that allows us to better compare human and AI representations. The limitations in alignment we revealed (e.g. the visual bias) open the door to new ways of evaluating or even improving their correspondence /fin
23.06.2025 20:02
This means that DNNs in fact only appear to approximate the semantic nature of the dimensions found in humans; this interpretation remains limited and doesn't reflect the full richness and specificity of human representations. 14/n
23.06.2025 20:02
However, when looking at the differences, there were clear examples where the DNN dimension yielded low values when the human dimension was clearly high (e.g., animals in an "animal-related" dimension), or where the DNN dimension was high when it should have been low (e.g., a shopping cart). 13/n
23.06.2025 20:02
However, what was still missing was a face-to-face comparison with humans. For that, we used the most human-AI-aligned pairs of dimensions. At first glance, it again seemed as if they reflected similar information. 12/n
23.06.2025 20:02
We also maximized the activity of individual dimensions, and we manipulated individual dimensions in images and observed selective changes in those dimensions, all yielding consistent results. Overall, this should demonstrate that the dimensions are indeed highly interpretable, right? 11/n
23.06.2025 20:02
We first created heatmaps of the image regions relevant to individual dimensions using Grad-CAM. The results made sense: for example, an alleged technology dimension highlighted the button of a flashlight as informative about technology. 10/n
23.06.2025 20:02
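The Grad-CAM heatmaps come from a simple combination rule: weight each feature channel by its spatially averaged gradient, sum the weighted maps, and keep the positive part. A minimal numpy sketch of just that combination step, on toy arrays (no real network or backprop involved):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM combination step for one conv layer.

    activations, gradients: arrays of shape (channels, height, width),
    i.e. the layer's feature maps and the gradient of the target score
    w.r.t. those maps (here supplied directly instead of computed)."""
    weights = gradients.mean(axis=(1, 2))             # one importance weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    return np.maximum(cam, 0)                         # ReLU: keep positive evidence only

# Toy example: 4 channels of 8x8 feature maps with random values
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(acts, grads)
```

In practice the heatmap is then upsampled to the input resolution and overlaid on the image; bright regions are the ones driving the targeted dimension.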
OK, but despite these differences, it still appeared as if the DNN at least carried many meaningfully interpretable dimensions. To put this to the test, we ran multiple experiments on the DNN dimensions to probe their interpretability. 9/n
23.06.2025 20:02
What this means is that even at the highest level of representations, to solve similar tasks as humans, DNNs rely on very different information than humans. 8/n
23.06.2025 20:02
This visual bias wasnβt specific to VGG-16. We repeated the same dimension identification approach for other common network architectures and found a similar bias across all tested networks. 7/n
23.06.2025 20:02
However, this comparison already revealed an important difference. When we quantified the proportion of visual, semantic, and mixed dimensions, it turned out that human representations were much more semantic, while the DNN was dominated by visual dimensions. 6/n
23.06.2025 20:02
Using the penultimate layer of the common VGG-16 architecture, we found 70 representational dimensions. At first glance they appeared to be largely interpretable, reflecting visual and semantic object dimensions. Some of them even appeared to encode basic shape features. 5/n
23.06.2025 20:02
Just as with humans, we can then infer the core representational dimensions from these similarity "judgments". This now enables us to directly compare representational dimensions in humans with those found in deep nets! So, what do we find? 4/n
23.06.2025 20:02
To address this, we adapted a recent approach in cognitive science for identifying core representational dimensions underlying human similarity judgments in an odd-one-out task. The trick: We treat a deep neural network representation as a human playing the odd-one-out game. 3/n
23.06.2025 20:02
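The network's odd-one-out "choice" can be simulated in a few lines: given embeddings for three images, keep the most similar pair and reject the third item. A toy sketch (dot-product similarity is an assumption here; the actual similarity measure used in the paper may differ):

```python
import numpy as np

def odd_one_out(embeddings):
    """Pick the odd item among three embedding vectors: the pair with the
    highest dot-product similarity is kept; the remaining item is rejected."""
    e = np.asarray(embeddings)
    # For each candidate odd item, the similarity of the *other* pair
    sims = {0: e[1] @ e[2], 1: e[0] @ e[2], 2: e[0] @ e[1]}
    return max(sims, key=sims.get)  # index whose removal leaves the closest pair

# Toy example: two nearby vectors and one distant one
a = np.array([1.0, 0.0])
b = np.array([0.9, 0.1])
c = np.array([0.0, 1.0])
print(odd_one_out([a, b, c]))  # prints 2 (c is the odd one out)
```

Running millions of such simulated triplet trials yields a choice dataset that can be modeled exactly like the human behavioral data.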
How can we compare human and AI representations? A popular approach is studying representational similarities. But this method only tells us about the *degree* of alignment. Without clear hypotheses, we do not know *what it actually is* that makes them similar or different. 2/n
23.06.2025 20:02
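For contrast, the standard representational-similarity approach mentioned here reduces the comparison to a single alignment score: correlate the unique pairwise entries of the human and model representational dissimilarity matrices (RDMs). A small numpy sketch with made-up feature matrices:

```python
import numpy as np

def rdm(features):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the feature patterns of every pair of stimuli
    return 1.0 - np.corrcoef(features)

def spearman(a, b):
    # Spearman correlation via rank transform (fine here: no tied values)
    rank = lambda x: np.argsort(np.argsort(x))
    return np.corrcoef(rank(a), rank(b))[0, 1]

# Hypothetical data: 20 stimuli x 10 features for "humans" and a partly
# aligned "model" (the human matrix plus noise)
rng = np.random.default_rng(1)
human = rng.standard_normal((20, 10))
model = human + 0.5 * rng.standard_normal((20, 10))

iu = np.triu_indices(20, k=1)  # unique stimulus pairs (upper triangle)
alignment = spearman(rdm(human)[iu], rdm(model)[iu])
```

The single number `alignment` says *how much* the two systems agree, but nothing about *which* dimensions drive the agreement, which is exactly the gap the thread's dimension-based approach addresses.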
A more personal note: my sister Alexandra is a professional singer in the Northern German Radio Choir. She has a great solo tonight at 8pm CET in the Elbphilharmonie concert hall in Hamburg. If you like Italian baroque, tune in online:
www.ndr.de/orchester_ch...
20.06.2025 13:24
Simons Postdoctoral Fellow in Pawan Sinha's Lab at MIT. Experimental and computational approaches to vision, time, and development. Just joined Bluesky!
PhD Student @FU_Berlin co-supervised by Prof. Radoslaw M. Cichy and Prof. Tim Kietzmann, interested in machine learning and cognitive science.
cognitive neuroscience | early life adversity | defensive circuits | metaresearch. phd @SysNeuroHamburg @lonsdorflab.
medical student @charité.
psychologist.
The goal of our research is to understand how brain states shape decision-making, and how this process goes awry in certain neurological & psychiatric disorders
| tobiasdonner.net | University Medical Center Hamburg-Eppendorf, Germany
devel.app Founder, ELLIS Scholar, Donders PI & Computable Laureate, bridging Neuroscience & AI to reverse-engineer the brain w/ my pups, Kara & Kuzu
PhD student @mps-cognition & BCCN Berlin | vision, neuroscience & the easy problems of consciousness | @mesec-community cofounder
Neuroscience, mathematics, psychedelics (in any order).
Lead 1st psychedelic fMRI in Amsterdam.
Founder of the Amsterdam Psychedelic Research Association.
PhD.
Cognitive neuroscientist | PI @ COGNIZELab | Prof @ ISTBI, Fudan, Shanghai | Exploring memory encoding, Default Mode Network and mental navigation with 7T fMRI @cognizelab.bsky.social
International Max Planck Research School on Cognitive NeuroImaging | MPI CBS Leipzig | Leipzig University | TU Dresden | UCL | Focusing on: Cognitive Neuroscience, Clinical and Translational Neuroscience, Development of Neuroimaging and Modeling Methods
she/her
Cognitive Neuroscience PhD candidate
Univ of Granada/CIMCYC https://cimcyc.ugr.es/en
RamΓ³n y Cajal researcher at the CIMCYC (Universidad de Granada, Spain).
Currently migrating personal site to: https://ortiztudela.github.io/ortiztudela/
Flatiron Research Fellow #FlatironCCN. PhD from #mitbrainandcog. Incoming Asst Prof #CarnegieMellon in Fall 2025. I study how humans and computers hear and see.
Asst. Prof. at the University of Bonn in Germany. Research focus: neural basis of navigation and memory in humans
Cognitive Neuroscientist and Assistant Professor of Psychology at George Mason University. Perception of Time, Memory, & Action. Exec Director @ http://timingforum.org
Neuroscientist and connectome researcher. Human & chimpanzee brain connectivity / microstructure using diffusion MRI. Learning & brain plasticity @ MPI CBS, Leipzig
Cognitive Neuroscientist, PI of the Neural Codes of Intelligence Research Group @ Max Planck Institute for Empirical Aesthetics
Cognitive computational neuroscientist, diver & traveller
PI at ATR Institute International (Japan)
Shared account: Biological Psychology and Neuropsychology section of the German Psychological Society (DGPs) and German Society for Psychophysiology (DGPA)
https://www.dgps.de/fachgruppen/fgbi/
https://www.dgpa.de/index.php