Excited! Check this out
7/ Practical implications: language localizers based on the sentences > nonwords contrast are robust to task variation. But if your localizer includes an active task, make sure the control condition is at least as difficult as the critical condition, or you’ll conflate responses from the language and MD networks.
6/ Conclusion: The language network is primarily input-driven. Although modestly modulated by task demands, its response profile and activation pattern remain stable across tasks. Task demands, however, engage the MD network.
5/ In the language network, we can reliably decode the stimulus (sentences vs. nonwords), and more accurately than we can decode the task.
In contrast, in the MD network, task is better decoded than stimulus type.
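The paper's decoding analyses have their own pipeline; as a toy illustration of how pattern decoding works, here is a nearest-centroid classifier run on simulated voxel patterns (all data, sizes, and signal strengths below are made up for the sketch, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated voxel patterns: 40 trials x 100 voxels, two stimulus classes
# (e.g., sentences vs. nonwords) with an additive class-specific signal.
n_trials, n_voxels = 40, 100
labels = np.repeat([0, 1], n_trials // 2)
class_signal = rng.normal(size=(2, n_voxels))
patterns = rng.normal(size=(n_trials, n_voxels)) + 0.8 * class_signal[labels]

def zscore(a):
    return (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)

def nearest_centroid_decode(train_x, train_y, test_x):
    """Assign each test pattern to the class centroid it correlates with most."""
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    corr = zscore(test_x) @ zscore(centroids).T / test_x.shape[1]
    return corr.argmax(axis=1)

# Split-half cross-validation: train on even trials, test on odd trials
train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)
pred = nearest_centroid_decode(patterns[train], labels[train], patterns[test])
accuracy = (pred == labels[test]).mean()
```

Above-chance accuracy for one label (stimulus or task) but not the other is what distinguishes the two networks' profiles.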
4/ Task demands do increase responses in the language network, but reading sentences accompanied by active tasks also strongly recruits the Multiple Demand (MD) network, which is sensitive to task demands.
3/ The activations are remarkably consistent within individuals across tasks (and, as reported before, variable across individuals).
2/ Across all six tasks, the language network is strongly engaged by the sentences > nonwords contrast.
1/ We ran six versions of a language localizer, ranging from passive reading to sentiment judgments.
New preprint w/ @evfedorenko.bsky.social, @neuranna.bsky.social, Chandler Cheung, Matthew Siegelman, Alvincé Pongos, @hopekean.bsky.social, Alyx Tanner
A left fronto-temporal network selectively supports language comprehension and production. Are computations in this language network driven primarily by bottom-up input, or by top-down task demands?
🧵👇
www.biorxiv.org/content/10.6...
Try this out!
Many thanks to the volunteer organizers and the flash talk presenters for making the CCN watch party at Georgia Tech a success! And thanks to all attendees for coming and engaging in the discussions!
Looking forward to #CogSci2025! Find us throughout the conference.
P.S. If you’re a Matlab user, you can try the spm_ss toolbox developed by Alfonso Nieto-Castañon (which we adapted here for Python + BIDS)
github.com/alfnie/spm_ss
Many thanks to @evfedorenko.bsky.social & Alfonso Nieto-Castañon for developing these methods in Fedorenko et al. (2010) and subsequent work!
journals.physiology.org/doi/full/10....
For a detailed demo with code examples, check out our step-by-step guide 👉 funroi.readthedocs.io/en/latest/ex...
Built to be BIDS-compliant, funROI ensures your data is organized & reproducible. 📁
funROI also provides a wrapper for #Nilearn's first-level modeling: easily run GLM analyses with support for event-related & block designs, customizable hemodynamic response models, confound regression, and statistical contrasts.
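The wrapper calls into Nilearn, but the underlying block-design GLM logic can be sketched in plain NumPy. The single-gamma HRF and all numbers below are illustrative assumptions (Nilearn's default HRF, for instance, is a double-gamma), not funROI's settings:

```python
import numpy as np

TR, n_scans = 2.0, 120
t = np.arange(n_scans) * TR

# Simple single-gamma HRF sampled at the TR (an assumption for the sketch)
hrf_t = np.arange(0, 32, TR)
hrf = (hrf_t ** 5) * np.exp(-hrf_t)
hrf /= hrf.sum()

# Block design: alternating 16 s on / 16 s off boxcar for one condition,
# convolved with the HRF to form the condition regressor
boxcar = ((t // 16) % 2 == 0).astype(float)
regressor = np.convolve(boxcar, hrf)[:n_scans]

# Design matrix: condition regressor + intercept
X = np.column_stack([regressor, np.ones(n_scans)])

# Simulated voxel time series with a true effect of 2.0
rng = np.random.default_rng(1)
y = 2.0 * regressor + 0.5 + rng.normal(scale=0.3, size=n_scans)

# OLS fit: beta = argmin ||y - X b||; a contrast is just c @ beta
beta = np.linalg.lstsq(X, y, rcond=None)[0]
contrast = np.array([1.0, 0.0]) @ beta  # condition effect vs. baseline
```

Event-related designs work the same way, with a stick function per event in place of the boxcar.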
3 - Effect Estimation: Quantify the strength of neural responses in your fROIs.
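A key point of this step (following the Fedorenko et al., 2010, approach) is that effects are estimated in data independent from the data used to define the fROI. A simulated sketch of why that matters (arrays and numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-voxel contrast values in a 200-voxel parcel, from two independent runs
n_voxels = 200
true_effect = rng.normal(loc=1.0, size=n_voxels).clip(min=0)
run1 = true_effect + rng.normal(scale=1.0, size=n_voxels)
run2 = true_effect + rng.normal(scale=1.0, size=n_voxels)

# Define the fROI on run 1 (top 10% of voxels), estimate the effect on run 2.
k = n_voxels // 10
froi = np.argsort(run1)[-k:]
biased_estimate = run1[froi].mean()    # inflated: selection & estimation share data
unbiased_estimate = run2[froi].mean()  # cross-validated effect size
```

Selecting voxels and estimating their response on the same data capitalizes on noise; the held-out estimate does not.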
4 - Spatial Correlation: Compare within-subject activation patterns across conditions.
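Conceptually, this step is a voxelwise Pearson correlation between two activation maps restricted to an fROI. A minimal sketch with simulated maps (not the funROI API):

```python
import numpy as np

def spatial_correlation(map_a, map_b, froi_mask):
    """Pearson correlation of two activation maps over the voxels in `froi_mask`."""
    return np.corrcoef(map_a[froi_mask], map_b[froi_mask])[0, 1]

# Two simulated condition maps that share a common activation pattern
rng = np.random.default_rng(3)
shared = rng.normal(size=500)
map_a = shared + 0.5 * rng.normal(size=500)
map_b = shared + 0.5 * rng.normal(size=500)
froi_mask = np.zeros(500, dtype=bool)
froi_mask[:200] = True

r = spatial_correlation(map_a, map_b, froi_mask)
```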
5 - Overlap Estimation: Measure spatial overlap between parcels or fROIs.
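One common overlap measure for binary masks is the Dice coefficient (funROI's exact metric may differ; this is a generic sketch):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2 * inter / (mask_a.sum() + mask_b.sum())

a = np.array([1, 1, 1, 0, 0], dtype=bool)
b = np.array([0, 1, 1, 1, 0], dtype=bool)
overlap = dice(a, b)  # 2*2 / (3+3) ≈ 0.67
```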
2 - fROI Definition: Define subject-specific functional ROIs by selecting the top % of active voxels within each parcel (or use fixed voxel counts/p-value thresholds).
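The top-% selection can be sketched in plain NumPy on flattened maps (hypothetical arrays and function name, not the funROI API):

```python
import numpy as np

def define_froi(tmap, parcel, top_percent=10.0):
    """Mark the top `top_percent`% most active voxels within `parcel` (1-D arrays)."""
    k = max(1, int(round(parcel.sum() * top_percent / 100)))
    scores = np.where(parcel, tmap, -np.inf)  # exclude voxels outside the parcel
    froi = np.zeros(tmap.shape, dtype=bool)
    froi[np.argsort(scores)[-k:]] = True      # keep the k highest-scoring voxels
    return froi

rng = np.random.default_rng(4)
tmap = rng.normal(size=1000)            # simulated contrast map
parcel = rng.random(1000) < 0.3         # simulated ~300-voxel parcel
froi = define_froi(tmap, parcel, top_percent=10.0)
```

Swapping the top-% rule for a fixed voxel count or a p-value threshold only changes how `k` (or `scores`) is computed.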
Key features include:
1 - Parcel Generation: Create group parcels (brain masks) from individual activation maps with customizable smoothing & thresholds.
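The core of this step is a probabilistic overlap map across subjects; a simulated sketch of that part only (the full pipeline, following Fedorenko et al., 2010, also smooths the map and partitions it, e.g. by watershed, and the 50% threshold here is just an assumption):

```python
import numpy as np

# Stack of binarized individual activation maps (subjects x voxels; simulated)
rng = np.random.default_rng(5)
n_subjects, n_voxels = 20, 1000
subject_maps = rng.random((n_subjects, n_voxels)) < 0.1        # background noise
subject_maps[:, 100:140] |= rng.random((n_subjects, 40)) < 0.7  # a consistent region

# Probability map: fraction of subjects active at each voxel;
# candidate parcel = voxels active in at least half the subjects
prob_map = subject_maps.mean(axis=0)
parcel = prob_map >= 0.5
```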
funROI leverages subject-specific functional localization to boost the sensitivity & accuracy of your analyses.
It is also easy to use.
Excited to introduce funROI: A Python package for functional ROI analyses of fMRI data!
funroi.readthedocs.io/en/latest/
#fMRI #Neuroimaging #Python #OpenScience
Work w @neuranna.bsky.social
🧵👇