NSD-synthetic, the out-of-distribution companion dataset of NSD consisting of 7T fMRI responses to 284 artificial images, is now published.
#NeuroAI #CompNeuro #neuroscience #AI
doi.org/10.1038/s414...
This thread aligns closely with the core claim of our paper.
While naturalistic stimuli are highly valuable, large-scale natural data can yield spurious successes due to unintended shortcuts in complex analysis pipelines.
doi.org/10.1016/j.ne...
Our paper is now accepted at Neural Networks!
This work builds on our previous threads on X, updated with deeper analyses.
We revisit brain-to-image reconstruction using NSD + diffusion models and ask: do they really reconstruct what we perceive?
Paper: doi.org/10.1016/j.ne...
🧵 1/12
And here's an experimental podcast-style summary of the paper, generated via NotebookLM and directed by me!
Link: notebooklm.google.com/notebook/9c8...
This project wouldn't have been possible without the support of all our lab members.
Huge thanks to co-authors, and especially to Prof. Kamitani ( @ykamit.bsky.social), for their invaluable support throughout this work!
Our paper goes further, offering formal analysis: mathematical analysis, simulations, analysis of AI model representations, evaluation pitfalls, and meta-level insights into "realistic" reconstruction.
If this thread sparked your interest, please take a look at our paper!
NSD's image diversity is smaller than expected, but this doesn't diminish its value. New datasets like NSD-synthetic (arxiv.org/abs/2503.06286) and NSD-imagery (www.arxiv.org/abs/2506.06898) will also be valuable. Still, we should choose data splits that align with our research goals.
13.06.2025 09:22
So, how should we interpret these reconstruction methods? We argue they're better understood as visualizations of decoded content, not true reconstructions.
Visualization itself also has value, but it's crucial to recognize the huge gap between visualization and reconstruction.
Taken together, our results suggest recent diffusion-based reconstructions are a mix of classification into trained categories and hallucination by generative AIs.
This deviates fundamentally from genuine visual reconstruction, which aims to recover arbitrary visual experiences.
What about the Generator (diffusion model)?
We fed it true image features instead of predicted ones.
The outputs were semantically similar, but perceptually quite different.
It seems the Generator relies mainly on semantic features, with less focus on perceptual fidelity.
Given the overlap between training/test sets, can the Translator predict test stimuli effectively?
Careful identification analyses revealed a fundamental limitation in generalizing beyond the training distribution.
The Translator, though a regressor, behaves more like a classifier.
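For readers unfamiliar with the metric, here is a minimal sketch of pairwise identification on synthetic data; the function name, array sizes, and noise level are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def identification_accuracy(pred, truth):
    """Pairwise identification: how often a predicted feature vector
    correlates more with its own true vector than with a distractor.
    pred, truth: (n_samples, n_features). Chance level is 0.5."""
    p = (pred - pred.mean(1, keepdims=True)) / pred.std(1, keepdims=True)
    t = (truth - truth.mean(1, keepdims=True)) / truth.std(1, keepdims=True)
    corr = p @ t.T / pred.shape[1]      # corr[i, j] = r(pred_i, truth_j)
    own = np.diag(corr)[:, None]        # correlation with the correct item
    n = len(corr)
    return (own > corr).sum() / (n * (n - 1))  # self-pairs never count as wins

rng = np.random.default_rng(0)
truth = rng.normal(size=(50, 128))                  # stand-in "true" features
pred = truth + 0.5 * rng.normal(size=truth.shape)   # noisy "decoded" features
acc = identification_accuracy(pred, truth)
```

A model that only predicts the right cluster, not the exact item, scores well against cross-cluster distractors but near chance against within-cluster ones, which is how identification can expose classifier-like behavior.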
We first examine the Latent features. UMAP visualization of NSD's CLIP features revealed (A):
- distinct clusters (~40)
- substantial overlap between training and test sets
NSD test images were also perceptually similar to training images (B), unlike in carefully curated Deeprecon (C).
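As an illustration of this kind of check (UMAP itself requires the umap-learn package; PCA is used here as a dependency-light stand-in, on placeholder features rather than real CLIP embeddings):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Placeholder "CLIP features": 5 synthetic clusters standing in for the
# ~40 semantic clusters seen in NSD (real features would come from CLIP).
centers = rng.normal(scale=5.0, size=(5, 32))
feats = np.vstack([c + rng.normal(size=(100, 32)) for c in centers])

emb = PCA(n_components=2).fit_transform(feats)   # 2-D map, UMAP analogue
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(feats)
# Plotting `emb` colored by train/test membership would show whether test
# points fall inside training clusters, i.e., the overlap seen in panel A.
```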
To better understand what was happening, we decomposed these methods into a TranslatorβGenerator pipeline.
The Translator maps brain activity to the Latent features, and the Generator converts those features into images.
We analyzed each component in detail.
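As a rough sketch of what a Translator can look like (ridge regression on synthetic data; the actual pipelines differ in architecture and scale):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Toy stand-ins: 300 "fMRI" patterns over 100 voxels, linearly related to
# 64-dim "latent" features plus noise (real data is far messier).
W = rng.normal(size=(100, 64))
brain = rng.normal(size=(300, 100))
latent = brain @ W + 0.1 * rng.normal(size=(300, 64))

# Translator: voxel patterns -> latent features
translator = Ridge(alpha=10.0).fit(brain[:250], latent[:250])
pred_latent = translator.predict(brain[250:])
# In the full pipeline, pred_latent would be handed to the Generator
# (e.g., a diffusion model) to synthesize an image.
```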
We tested whether these methods generalize beyond NSD.
They worked well on NSD (A), but performance severely dropped on Deeprecon (B).
The latest MindEye2 even generated training-set categories unrelated to test stimuli.
So what's behind this generalization failure?
"Reconstruction" is often understood as the ability to recover an arbitrary instance from a space of interest.
Prior works (e.g., Miyawaki+ 2008, Shen+ 2019) pursued this goal.
Recent studies report realistic reconstructions from NSD using CLIP + diffusion models.
But do they truly achieve this goal?
Yukiyasu Kamitani, Misato Tanaka, Ken Shirakawa
Visual Image Reconstruction from Brain Activity via Latent Representation
https://arxiv.org/abs/2505.08429
One big issue with some of the previous claims is that NSD, the massive 7T fMRI dataset with thousands of images, might not be the right dataset to test these hypotheses: it is built on MS COCO and has too high a similarity between training and test sets. arxiv.org/abs/2405.10078 16/n
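One simple way to quantify this concern is nearest-neighbor similarity from each test item to the training set; here is a hedged numpy sketch on synthetic features (not the actual NSD data):

```python
import numpy as np

def max_train_similarity(test_feats, train_feats):
    """Cosine similarity from each test item to its nearest training item.
    Values near 1 mean the test set is 'covered' by the training set."""
    t = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    return (t @ tr.T).max(axis=1)

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 64))                 # stand-in training features
near_dupe = train[0] + 0.05 * rng.normal(size=64)   # test item close to training
novel = rng.normal(size=64)                         # genuinely new test item
sims = max_train_similarity(np.vstack([near_dupe, novel]), train)
```

A test set full of high-similarity items rewards memorization of the training distribution, which is exactly why it cannot separate true reconstruction from classification.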
11.12.2024 22:18

I'm currently concerned about what the brain's encoding model predicts. Given that the target brain state is collected under naturalistic conditions and the inputs of the encoding model are derived from a deep neural network, I am not sure what the predictions actually represent.
16.11.2024 14:08