How segregated vs. integrated are face and body representations in human visual cortex?
In this new preprint with @kathadobs.bsky.social, we use DNNs and fMRI to find out.
www.biorxiv.org/content/10.6...
#neuroskyence
🧵 1/n
New preprint!
Why do people disagree about what looks beautiful, even when viewing the same stimulus?
We show that shared aesthetic experience is linked to shared gaze during naturalistic viewing: www.biorxiv.org/content/10.6...
1/4
📢 Workshop announcement.
We are super excited to announce the workshop "Perceptual Inferences: from philosophy to neuroscience", organized by Alexander Schütz and Daniel Kaiser.
📍 Rauischholzhausen Castle, near Marburg, Germany
🗓️ June 8 to 10, 2026.
1/4
Check out our new preprint! We demonstrate that real-world object search is shaped by both objects' inherent variability and searchers' individual priors, using a new approach that combines human drawings with DNN representational similarity analysis. ✏️🖥️🔍
8/8 To sum it up, real-world search depends on both what you're looking for and who is looking. Inherent object variability and individual experience with that variability shape how efficiently we find things in the real world.
7/8 Finally, do individuals guide attention using their own unique object templates? If so, individual differences should be strongest for the most prioritised template (i.e., the first drawing). Indeed, targets more similar to a participant's own first drawing predict their search the most! 🚶‍♀️👁️
6/8 Do individuals prioritise specific object forms in their templates? If drawing order reflects prioritisation, earlier drawings should predict faster search when target-template match is higher. By relating target-template similarity to search time, we find that the first drawing predicts search the most!
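A toy sketch of this kind of drawing-order analysis on simulated data (hypothetical code and numbers, not the paper's data or pipeline): search time is correlated with target-template similarity separately for each drawing order, and the drawing with the strongest negative correlation is the best predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Simulated target-template similarity for drawings 1-4 (rows 0-3)
sim = rng.uniform(0, 1, size=(4, n_trials))

# Simulated search times driven mainly by similarity to the first drawing:
# higher similarity -> faster search
rt = 1.5 - 0.8 * sim[0] + 0.1 * rng.normal(size=n_trials)

# Correlate similarity with search time separately per drawing order
r = [float(np.corrcoef(sim[i], rt)[0, 1]) for i in range(4)]
best = int(np.argmin(r))  # most negative correlation = strongest predictor
print(best)
```

By construction of this simulation, the first drawing's similarity should show the strongest (most negative) correlation with search time, mirroring the pattern the thread reports.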
5/8 But could variability be individual-specific? When searching for a shoe, I might prioritise a sneaker, while someone else looks for a boot. To test this, we look at the drawings of each individual: participants make 4 drawings of each object (1-4), followed by a visual search task.
4/8 To test this, we run a visual search task for 200 real-world objects on an independent set of participants. Voilà! As predicted, when an object category is more variable, search takes longer, and when it's less variable, search is faster.
3/8 We use drawings as a window into object variability. By collecting human object drawings and measuring the similarity between them with a DNN, we capture variability across object categories, giving us an index of object variability.
Our key question: does variability influence search?
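As an illustration of how such a drawing-based variability index could be computed, here is a minimal sketch (hypothetical code; the random arrays stand in for DNN features of drawings, and this is not the authors' actual pipeline): the index is the mean pairwise cosine distance between drawing embeddings of one category.

```python
import numpy as np

def variability_index(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine distance between drawing embeddings.

    embeddings: (n_drawings, n_features) array of DNN features,
    one row per drawing of the same object category.
    """
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit.T                        # pairwise cosine similarities
    iu = np.triu_indices(len(embeddings), k=1)  # distinct pairs only
    return float(np.mean(1.0 - sims[iu]))

rng = np.random.default_rng(0)
# A consistent category ("balls"): drawings cluster around one prototype
consistent = rng.normal(size=(10, 128)) * 0.1 + rng.normal(size=128)
# A variable category ("shoes"): drawings scatter widely
variable = rng.normal(size=(10, 128))
print(variability_index(consistent) < variability_index(variable))  # True
```

On this toy data the low-variability category gets the smaller index; the prediction in the thread is that larger indices go with slower search.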
2/8 If so, objects with little variability (balls) should activate narrow templates that enable efficient search. In contrast, highly variable categories (shoes) should activate broader templates, making search less efficient.
But how do we measure variability in templates of real-world objects?
Real-world objects vary a lot in how they look. Some categories like balls or rackets are relatively consistent, while others like shoes or bags come in many forms (stilettos, boots, sneakers). Could this inherent variability affect the precision of target templates as we search for them?
🧨 Preprint alert
Is it easier to find a ball than a shoe? The answer lies in how variable we think these objects are in the real world. www.biorxiv.org/content/10.6...
w/ the amazing @dkaiserlab.bsky.social & @luchunyeh.bsky.social
🧵 1/8
Now in press at Nature Communications!
www.nature.com/articles/s41...
Check it out if you are interested in category selectivity, the organization of visual cortex, and topographic models!
🚨 New preprints out! 🚨
Excited to share two new preprints from my #MSCA project. With Daniel @dkaiserlab.bsky.social, Marius @peelen.bsky.social, and Belma Seferovic, we show how contextual associations shape real-world object representations and guide everyday visual task performance.
1/n
I'm excited to share the first preprint from my PhD project!
Together with Daniel Kaiser (@dkaiserlab.bsky.social), we investigated how internal models shape inter-individual differences in the perception and neural processing of natural scenes.
Preprint: osf.io/preprints/ps...
1/n
New Correspondence with @davidpoeppel.bsky.social in Nat Rev Neurosci. www.nature.com/articles/s41...
Here, we critique a recent paper by Rosas et al. We argue that "Bottom-up" and "Top-down" neuroscience have various meanings in the literature.
PDF: rdcu.be/eSKYI
Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy
Really interesting work by Bakhurin and colleagues challenging the reward prediction error hypothesis of dopamine:
www.nature.com/articles/s41...
I love this figure which both echoes and undermines the famous figure from Schultz et al. (1997).
How can we characterize the contents of our internal models of the world? We highlight participant-driven approaches, from drawings to descriptions, to study how we expect scenes to look! 🤩
Go Lu!
How is high-level visual cortex organized?
In a new preprint with @martinhebart.bsky.social & @kathadobs.bsky.social, we show that category-selective areas encode a rich, multidimensional feature space
www.biorxiv.org/content/10.1...
#neuroskyence
🧵 1/n
#VSS2025 was a blast - great science, fun people, beach vibes. We'll be back! @vssmtg.bsky.social
It's very little we understand! There's the data gap, but also very little done about the known and existing gaps.