
Susan Ajith

@suzibot.bsky.social

PhD with @dkaiserlab.bsky.social, DE | prev IIT-GN, IN | she/her

137 Followers  |  223 Following  |  12 Posts  |  Joined: 08.11.2024

Posts by Susan Ajith (@suzibot.bsky.social)

How segregated vs. integrated are face and body representations in human visual cortex?

In this new preprint with @kathadobs.bsky.social, we use DNNs and fMRI to find out.

www.biorxiv.org/content/10.6...
#neuroskyence

🧡 1/n

24.02.2026 12:35 — 👍 25    🔁 8    💬 1    📌 1

New preprint!

Why do people disagree about what looks beautiful, even when viewing the same stimulus?

We show that shared aesthetic experience is linked to shared gaze during naturalistic viewing: www.biorxiv.org/content/10.6...
1/4

12.02.2026 10:25 — 👍 8    🔁 3    💬 1    📌 1

📢 Workshop announcement.

We are super excited to announce the workshop "Perceptual Inferences: from philosophy to neuroscience", organized by Alexander Schütz and Daniel Kaiser.

πŸ“ Rauischholzhausen Castle, near Marburg, Germany
πŸ—“οΈ June 8 to 10, 2026.
1/4

10.02.2026 09:00 — 👍 35    🔁 14    💬 1    📌 1

Check out our new preprint! 🎉 We demonstrate that real-world object search is shaped by both objects’ inherent variability and searchers’ individual priors, using a new approach that combines human drawings with DNN representational similarity analysis. ✍️🖥️👀

09.02.2026 10:51 — 👍 12    🔁 4    💬 0    📌 0

8/8 To sum up: real-world search depends on both what you’re looking for and who is looking. Inherent object variability and individual experience with that variability shape how efficiently we find things in the real world.

09.02.2026 10:41 — 👍 1    🔁 0    💬 0    📌 0

7/8 Finally, do individuals guide attention using their own unique object templates? If so, individual differences should be strongest for the most prioritised template (i.e., the first drawing). Indeed, targets more similar to a participant’s own first drawing predict their search best! 🚶‍♂️👁️

09.02.2026 10:40 — 👍 0    🔁 0    💬 1    📌 0

6/8 Do individuals prioritise specific object forms in their templates? If drawing order reflects prioritisation, earlier drawings should predict faster search when target-template match is higher. Relating target-template similarity to search time, we find that the first drawing predicts search best!

09.02.2026 10:38 — 👍 0    🔁 0    💬 1    📌 0
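The drawing-order analysis above can be sketched in a toy simulation. Everything here is a hedged illustration, not the preprint's pipeline: the function name, the prioritisation weights, and the random numbers standing in for real similarity scores and reaction times are all assumptions.

```python
import numpy as np

def template_predictivity(similarity: np.ndarray, search_time: np.ndarray) -> float:
    """Pearson correlation between target-template similarity and search time.

    A stronger negative correlation means targets resembling the template
    are found faster, i.e. that template predicts search better.
    """
    return float(np.corrcoef(similarity, search_time)[0, 1])

rng = np.random.default_rng(1)
n_targets = 200

# Simulated similarity of each search target to drawings 1-4.
sims = rng.uniform(0, 1, (4, n_targets))

# Assumed prioritisation gradient: by construction, the first drawing
# drives reaction times most strongly in this toy data.
weights = np.array([1.0, 0.5, 0.25, 0.1])
rt = 2.0 - weights @ sims + rng.normal(0, 0.2, n_targets)

r_per_drawing = [template_predictivity(sims[i], rt) for i in range(4)]
# The earliest drawing shows the strongest (most negative) correlation.
assert np.argmin(r_per_drawing) == 0
```

This mirrors the logic of the thread: if correlations weaken monotonically with drawing order, earlier drawings carry more template weight.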

5/8 But could variability be individual-specific? When searching for a shoe, I might prioritise a sneaker, while someone else looks for a boot. To test this, we look at each individual’s drawings: participants make 4 drawings of each object (1–4), followed by a visual search task.

09.02.2026 10:33 — 👍 0    🔁 0    💬 1    📌 0

4/8 To test this, we run a visual search task for 200 real-world objects with an independent set of participants. Voilà! As predicted, when an object category is more variable, search takes longer; when it’s less variable, search is faster.

09.02.2026 10:31 — 👍 0    🔁 0    💬 1    📌 0

3/8 We use drawings as a window into object variability. By looking at human object drawings and measuring similarity between drawings with a DNN, we capture variability across object categories. This gives us an index of object variability.
Our key question: does variability influence search?

09.02.2026 10:30 — 👍 0    🔁 0    💬 1    📌 0
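The variability index described above can be sketched roughly as mean pairwise dissimilarity among per-category drawing embeddings. This is a hypothetical illustration: the function name is invented, and random vectors stand in for the DNN features the preprint actually extracts from drawings.

```python
import numpy as np

def variability_index(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine dissimilarity among drawing embeddings.

    `embeddings`: (n_drawings, n_features) array of features for one
    object category. Higher return value = more variable category.
    """
    # L2-normalise rows so dot products are cosine similarities
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = sim.shape[0]
    # Average similarity over off-diagonal pairs only, then invert
    mean_sim = (sim.sum() - np.trace(sim)) / (n * (n - 1))
    return 1.0 - mean_sim

rng = np.random.default_rng(0)
# Tight category: drawings cluster around one shared prototype
balls = rng.normal(0, 0.1, (20, 512)) + rng.normal(0, 1, 512)
# Variable category: drawings generated independently
shoes = rng.normal(0, 1, (20, 512))
assert variability_index(balls) < variability_index(shoes)
```

Under this index, the "shoes" stand-in comes out far more variable than the "balls" stand-in, matching the thread's intuition.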

2/8 If so, objects with little variability (balls) should activate narrow templates that enable efficient search. In contrast, highly variable categories (shoes) should activate broader templates, making search less efficient.
But how do we measure variability in templates of real-world objects?

09.02.2026 10:27 — 👍 0    🔁 0    💬 1    📌 0

Real-world objects vary a lot in how they look. Some categories like balls or rackets are relatively consistent, while others like shoes or bags come in many forms (stilettos, boots, sneakers). Could this inherent variability affect the precision of target templates as we search for them?

09.02.2026 10:26 — 👍 0    🔁 0    💬 1    📌 0

🧨 Preprint alert
Is it easier to find a ball than a shoe? The answer lies in how variable we think these objects are in the real world. www.biorxiv.org/content/10.6...

w/ the amazing @dkaiserlab.bsky.social & @luchunyeh.bsky.social 🦄

🧡1/8

09.02.2026 10:26 — 👍 19    🔁 8    💬 1    📌 2

Now in press at Nature Communications!
www.nature.com/articles/s41...
Check it out if you are interested in category selectivity, the organization of visual cortex, and topographic models!

21.12.2025 12:26 — 👍 21    🔁 8    💬 0    📌 0

🚨 New preprints out!🚨
Excited to share two new preprints from my #MSCA project. With Daniel @dkaiserlab.bsky.social, Marius @peelen.bsky.social, and Belma Seferovic, we show how contextual associations shape real-world object representations and guide everyday visual task performance.
👇👇👇 1/n

16.12.2025 11:50 — 👍 17    🔁 3    💬 1    📌 1

I’m excited to share the first preprint from my PhD project!
Together with Daniel Kaiser (@dkaiserlab.bsky.social), we investigated how internal models shape inter-individual differences in the perception and neural processing of natural scenes.
Preprint: osf.io/preprints/ps...
1/n

04.12.2025 14:23 — 👍 21    🔁 7    💬 1    📌 1
Top-down and bottom-up neuroscience as collections of practices - Nature Reviews Neuroscience

New Correspondence with @davidpoeppel.bsky.social in Nat Rev Neurosci. www.nature.com/articles/s41...

Here, we critique a recent paper by Rosas et al. We argue that "Bottom-up" and "Top-down" neuroscience have various meanings in the literature.

PDF: rdcu.be/eSKYI

02.12.2025 15:13 — 👍 41    🔁 15    💬 1    📌 1

Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy

01.12.2025 11:26 — 👍 82    🔁 40    💬 4    📌 1

Really interesting work by Bakhurin and colleagues challenging the reward prediction error hypothesis of dopamine:
www.nature.com/articles/s41...
I love this figure which both echoes and undermines the famous figure from Schultz et al. (1997).

14.10.2025 11:05 — 👍 141    🔁 52    💬 3    📌 6

How can we characterize the contents of our internal models of the world? We highlight participant-driven approaches, from drawings to descriptions, to study how we expect scenes to look! 🤩

08.10.2025 09:27 — 👍 9    🔁 1    💬 1    📌 0

Go Lu! 💃💜

26.06.2025 16:23 — 👍 2    🔁 0    💬 0    📌 0

How is high-level visual cortex organized?

In a new preprint with @martinhebart.bsky.social & @kathadobs.bsky.social, we show that category-selective areas encode a rich, multidimensional feature space 🌈

www.biorxiv.org/content/10.1...
#neuroskyence

🧡 1/n

18.06.2025 12:28 — 👍 75    🔁 30    💬 1    📌 4

#VSS2025 was a blast - great science, fun people, beach vibes. We’ll be back! @vssmtg.bsky.social

22.05.2025 15:37 — 👍 14    🔁 2    💬 0    📌 0

There's very little we understand! There's the data gap, but also very little being done about the known, existing gaps.

03.03.2025 10:44 — 👍 6    🔁 1    💬 0    📌 0