pRF fitting toolbox wish list
With this form we are taking stock of the field's wishes when it comes to pRF fitting software implementations. We will be presenting the results from this form in our kick-off meeting, and will use t...
Hey everyone at @vssmtg.bsky.social! If you're interested in pRF fitting, go visit Garikoitz Lerma-Usabiaga's poster on pRF fitting methods!
For our development of these tools, we're very interested to hear what you want from them. Please fill out our questionnaire:
forms.gle/fx5UMs1362jv...
17.05.2025 12:03
Person, standing on the beach doing yoga, standing on one foot, with one hand on the ground, contorted in an unusual and challenging pose, while contemplating the idea that generative AI has "world models"
16.12.2024 01:00
A thread motivated by a new paper on body representations in the human brain at a fine-grained (multi-unit) level, spearheaded by J Garcia Ramirez, T Theys, and P Janssen, where I was a small part of a bigger collaboration that also included S Bracci, R Murty and @nancykanwisher.bsky.social. 1/n
13.12.2024 16:15
And people's sampling of the videos with their eyes allows them to shape their own brain responses. This will likely add a further level of 'individuality' to brain responses, lowering inter-subject correlation (ISC).
11.12.2024 15:46
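For anyone unfamiliar with the metric: ISC is usually computed leave-one-out, correlating each subject's timecourse with the mean of everyone else's. A minimal numpy sketch on made-up data (the shapes and noise level are illustrative, not from any dataset mentioned here):

    import numpy as np

    def isc_leave_one_out(data):
        # data: (n_subjects, n_timepoints) for one voxel or region.
        # Returns each subject's Pearson correlation with the mean
        # timecourse of all the other subjects.
        n_subjects = data.shape[0]
        iscs = np.empty(n_subjects)
        for s in range(n_subjects):
            others = np.delete(data, s, axis=0).mean(axis=0)
            iscs[s] = np.corrcoef(data[s], others)[0, 1]
        return iscs

    # Toy demo: a shared stimulus-driven signal plus idiosyncratic noise
    # (standing in for individual gaze behaviour) lowers ISC.
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(200)
    subjects = shared + rng.standard_normal((10, 200))
    print(isc_leave_one_out(subjects).mean())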
Our results indicate the brain uses aligned, 'multiplexed' topographic maps to structure connections between vision and somatosensation. The computational machinery classically attributed to the somatosensory system is embedded within/aligned with that of the "visual" system. 🧵
03.12.2024 15:13
These findings complement recent work indicating that dorsolateral visual cortex is a fundamentally multi-sensory part of the brain whose role extends beyond passive visual analysis to encompass semantic and bodily information relevant to interactions with the world. 20/n
03.12.2024 15:13
These encoding model fits revealed a new map of visual body-part selectivity, which overlapped with somatotopic tuning across the fusiform body area (FBA), the extrastriate body area (EBA) and, strikingly, the visual word form area (VWFA). 19/n
03.12.2024 15:13
To address this, we combined the Natural Scenes Dataset with a pose-detection algorithm to fit a body-part tuning encoding model. This allowed us to generate a map of visual body part preference, organised along a toe-to-tongue axis similar to that of the somatotopic connectivity maps. 18/n
03.12.2024 15:13
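The post doesn't say which estimator was used; a common choice for this kind of body-part encoding model is regularized linear regression from pose-derived features to voxel responses. A minimal sketch under that assumption, with hypothetical shapes and simulated data:

    import numpy as np
    from sklearn.linear_model import RidgeCV

    # Hypothetical sizes: 1,000 images x 20 body-part features (e.g. how much
    # of each image a pose detector assigns to each part) and 500 voxels.
    rng = np.random.default_rng(1)
    X = rng.random((1000, 20))                         # pose-derived features
    Y = X @ rng.standard_normal((20, 500)) \
        + rng.standard_normal((1000, 500))             # simulated voxel responses

    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X, Y)
    weights = model.coef_                              # (500 voxels, 20 parts)

    # A voxel's preferred body part = the feature with the largest weight;
    # plotting this index on the cortical surface gives a preference map.
    preferred_part = weights.argmax(axis=1)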
Much of visual cortex is body-part selective. If this tuning relates to our somatotopic connectivity, we should also be able to predict visual body part selectivity from somatotopic tuning and reveal multi-modal body-referenced alignment playing out at more semantic levels. 17/n
03.12.2024 15:13
We did indeed find evidence for an alignment between visual field tuning and body part tuning beyond that expected by chance. We found this mostly dorsally and in the superior portion of EBA. 16/n
03.12.2024 15:13
But do these bodily maps predict anything about visual function? For instance, could lower body part tuning (e.g. toes) predict lower visual field tuning? Such an alignment might facilitate interactions with the environment. 15/n
03.12.2024 15:13
Yes! Throughout dorsolateral visual cortex, we see several body-part gradients separated by reversals. These maps were consistent across hemispheres and subject splits. 14/n
03.12.2024 15:13
But what about the body part tuning of these somatotopic activations? Do these dorsolateral regions exhibit orderly gradients, as found in 'core' somatosensory regions around the central sulcus? The answer is... 13/n
03.12.2024 15:13
We then repeated our somatosensory connectivity analyses separately on a movie section involving human agents and another without any humans. This demonstrated that somatotopic responses are not generic, but driven by movie content, specifically that featuring human action. 12/n
03.12.2024 15:13
Our analysis allows us to contrast somatotopic and retinotopic explained variance. All dorsolateral (but not ventral!) visual regions were characterised by multimodal topographic connectivity. These regions care as much or more about the body as they do the visual scene! 11/n
03.12.2024 15:13
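The somatotopic-vs-retinotopic contrast described above boils down to comparing out-of-sample explained variance (R²) of the two connective-field predictions for each target voxel. An illustrative sketch with simulated timecourses (nothing here comes from the actual analysis):

    import numpy as np

    def r2(pred, y):
        # Explained variance of an out-of-sample prediction.
        return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

    # Simulated held-out timecourses for one target voxel: here the
    # retinotopic (V1-based) prediction tracks the data more closely.
    rng = np.random.default_rng(2)
    data = rng.standard_normal(300)
    pred_v1 = 0.9 * data + 0.3 * rng.standard_normal(300)  # V1 prediction
    pred_s1 = 0.5 * data + 0.7 * rng.standard_normal(300)  # S1 prediction

    # Positive contrast: the somatotopic model explains this voxel better;
    # mapping the contrast across cortex gives the comparison in the post.
    contrast = r2(pred_s1, data) - r2(pred_v1, data)
    print(contrast)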
We find that movie watching leads to increased somatotopic connectivity in the somatosensory network outlined above. But strikingly, we now also find that dorsolateral visual cortex has structured connectivity with S1. Look at that red band across visual cortex! 10/n
03.12.2024 15:13
So, we turned to the HCP movie watching experiment. This dataset allows us to investigate the relation of somatosensory connectivity to naturalistic visual experiences, where mental content is yoked to a visual stimulus. 9/n
03.12.2024 15:13
So, during resting state, endogenous activations throughout frontal, parietal, and insular cortex resonate along scaffolding provided by the somatotopic structure of bodily sensations. But how this resonance relates to mental content in resting state is unclear... 8/n
03.12.2024 15:13
Body part tuning revealed multiple somatotopic gradients and tuning biases that are typically only observed with exogenous stimulation (e.g. brushing people's skin). Critically, we show that these same detailed principles can be revealed in the absence of sensory input! 7/n
03.12.2024 15:13
Since this method uses one part of the brain to explain another, we can fit these models to any data - even with no stimulus! Analysing 7T resting state data from the HCP, we found signals in a large cortical network were predicted by somatotopic connectivity with S1. 6/n
03.12.2024 15:13
This allows us to 'project' the visual field or body part tuning of V1 and S1 onto the rest of the brain. We're performing connectivity-derived retinotopic and somatotopic mapping, which allows us to find higher-level sensory maps throughout the brain. 5/n
03.12.2024 15:13
Our computational model of functional connectivity explains target voxels' timecourses as 'connective fields' located on the surfaces of both V1 and S1. Due to their specific locations on our primary sensory cortices, connective fields inherit visual and somatotopic tuning. 4/n
03.12.2024 15:13
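For readers new to connective field modelling: a target voxel's timecourse is predicted as a Gaussian-weighted sum of source-region vertex timecourses, with the Gaussian sitting on the source surface so its centre inherits that location's retinotopic or somatotopic tuning. A minimal grid-search sketch; all names are hypothetical, and Euclidean distance stands in for the geodesic surface distance a real implementation would use:

    import numpy as np

    def fit_connective_field(target_ts, source_ts, source_xyz, sigmas):
        # target_ts: (T,) z-scored target voxel timecourse
        # source_ts: (T, V) z-scored timecourses of V source vertices (V1 or S1)
        # source_xyz: (V, 3) vertex coordinates (geodesic distance in practice)
        # Returns (best_center_vertex, best_sigma, best_correlation).
        best = (None, None, -np.inf)
        for center in range(source_xyz.shape[0]):
            d2 = np.sum((source_xyz - source_xyz[center]) ** 2, axis=1)
            for sigma in sigmas:
                weights = np.exp(-d2 / (2 * sigma ** 2))
                pred = source_ts @ weights
                r = np.corrcoef(pred, target_ts)[0, 1]
                if r > best[2]:
                    best = (center, sigma, r)
        return best

    # Toy demo: the target is driven by source vertex 7, so the fitted centre
    # should land there; the known tuning of that location (visual field
    # position for V1, body part for S1) is then assigned to the target voxel.
    rng = np.random.default_rng(3)
    src = rng.standard_normal((200, 50))
    xyz = rng.random((50, 3))
    tgt = src[:, 7] + 0.5 * rng.standard_normal(200)
    print(fit_connective_field(tgt, src, xyz, sigmas=[0.05, 0.1, 0.2]))

Assigning each target voxel the tuning of its fitted centre is the 'projection' step described in post 5/n above.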
These findings raise the question of how the brain connects the computational machinery of vision and touch. Here, we developed a model for measuring joint visual and somatosensory tuning throughout the brain and applied it to resting state and movie watching data. 3/n
03.12.2024 15:13
When we see bodily experiences, our brain responds 'as if' simulating the observed tactile experience as our own. We know that e.g. viewing fingers being touched can activate finger-selective regions of primary somatosensory cortex (S1) (tinyurl.com/vissom3b) 2/n
03.12.2024 15:13
Since I need a first bsky post, here's a rehash of @nickhedger.bsky.social's thread announcing our preprint with Kendrick Kay & Thomas Naselaris. TL;DR: High-level visual cortex is tiled with maps 'multiplexing' vision and touch. tinyurl.com/seesoma 🧵 1/n
03.12.2024 15:13
I can confirm your #10: lowering the sampling rate from 1 or 2 kHz to 500 or even 250 Hz can drastically reduce pixel noise!
02.12.2024 22:12
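If you want to test the sampling-rate effect on an existing recording rather than re-recording: decimating with an anti-aliasing filter approximates acquisition at the lower rate. A sketch using scipy.signal.decimate on simulated data (the signal and noise here are made up, not from any real tracker):

    import numpy as np
    from scipy.signal import decimate

    fs = 1000                                  # original sampling rate, Hz
    t = np.arange(0, 2, 1 / fs)
    gaze = np.sin(2 * np.pi * 2 * t)           # slow eye-position component
    rng = np.random.default_rng(4)
    noisy = gaze + 0.3 * rng.standard_normal(t.size)   # broadband 'pixel noise'

    # 1000 Hz -> 250 Hz with an anti-aliasing low-pass; the filter averages
    # out high-frequency noise, which is why the lower rate looks cleaner.
    low_rate = decimate(noisy, q=4, zero_phase=True)
    print(noisy.std(), low_rate.std())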