Susan Ajith

@suzibot.bsky.social

PhD with @dkaiserlab.bsky.social, DE | prev IIT-GN, IN | she/her

139 Followers 225 Following 12 Posts Joined Nov 2024
2 weeks ago

How segregated vs. integrated are face and body representations in human visual cortex?

In this new preprint with @kathadobs.bsky.social, we use DNNs and fMRI to find out.

www.biorxiv.org/content/10.6...
#neuroskyence

🧵 1/n

3 weeks ago

New preprint!

Why do people disagree about what looks beautiful, even when viewing the same stimulus?

We show that shared aesthetic experience is linked to shared gaze during naturalistic viewing: www.biorxiv.org/content/10.6...
1/4

1 month ago

📢 Workshop announcement.

We are super excited to announce the workshop Perceptual Inferences, from philosophy to neuroscience, organized by Alexander Schütz and Daniel Kaiser.

๐Ÿ“ Rauischholzhausen Castle, near Marburg, Germany
๐Ÿ—“๏ธ June 8 to 10, 2026.
1/4

1 month ago

Check out our new preprint! 🎉 We demonstrate that real-world object search is shaped by both objects' inherent variability and searchers' individual priors, using a new approach that combines human drawings with DNN representational similarity analysis. ✍️🖥️👀

1 month ago

8/8 To sum it up, real-world search depends on both what you're looking for and who is looking. Inherent object variability and individual experience with that variability shape how efficiently we find things in the real world.

1 month ago

7/8 Finally, do individuals guide attention using their own unique object templates? If so, individual differences should be strongest for the most prioritised template (i.e., the first drawing). Indeed, targets more similar to a participant's own first drawing best predict their search! 🚶‍♂️👁️

1 month ago

6/8 Do individuals prioritise specific object forms in their templates? If drawing order reflects prioritisation, earlier drawings should predict faster search with higher target-template match. By relating target-template similarity to search time, we find the first drawing predicts search best!
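As a rough sketch of this kind of analysis (the data and variable names here are simulated and hypothetical, not from the preprint): correlate each drawing's target-template similarity with search time, and check which drawing carries the strongest negative relationship (higher match, faster search).

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Hypothetical target-template similarities for drawings 1-4 on each trial,
# and simulated search times that track the first drawing most strongly.
sims = {f"drawing_{i}": rng.uniform(0, 1, n_trials) for i in range(1, 5)}
search_time = (2.0 - 1.5 * sims["drawing_1"] - 0.3 * sims["drawing_2"]
               + 0.1 * rng.normal(size=n_trials))

# Pearson correlation between similarity and search time, per drawing:
# a more negative r means higher template match predicts faster search.
r = {d: np.corrcoef(s, search_time)[0, 1] for d, s in sims.items()}
best_predictor = min(r, key=r.get)  # drawing with the strongest negative r
assert best_predictor == "drawing_1"
```

With real data, the per-trial similarities would come from comparing DNN features of the search target to each of the participant's drawings, rather than being simulated.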

1 month ago

5/8 But could variability be individual-specific? When searching for a shoe, I might prioritise a sneaker, while someone else looks for a boot. To test this, we look at each individual's drawings: participants make 4 drawings of each object (1-4), followed by a visual search task.

1 month ago

4/8 To test this, we run a visual search task for 200 real-world objects with an independent set of participants. Voilà! As predicted, when an object category is more variable, search takes longer; when it's less variable, search is faster.

1 month ago

3/8 We use drawings as a window into object variability. By looking at human object drawings and measuring similarity between drawings with a DNN, we capture variability across object categories. This gives us an index of object variability.
Our key question: does variability influence search?
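As a minimal sketch of how such an index could be computed (the feature extractor and names here are hypothetical assumptions, not the preprint's actual pipeline): take DNN features for each category's drawings and score variability as one minus the mean pairwise cosine similarity.

```python
import numpy as np

def category_variability(features: np.ndarray) -> float:
    """Variability index for one object category.

    features: (n_drawings, n_dims) array of DNN activations, one row per
    drawing. Returns 1 - mean pairwise cosine similarity, so more
    dissimilar drawings yield a higher score.
    """
    # L2-normalise each drawing's feature vector
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = unit @ unit.T                        # cosine similarity matrix
    iu = np.triu_indices(len(features), k=1)   # upper triangle: unique pairs
    return 1.0 - sim[iu].mean()

# Toy check: four near-identical "ball" drawings should score lower
# than four unrelated "shoe" drawings.
rng = np.random.default_rng(0)
balls = np.tile(rng.normal(size=128), (4, 1)) + 0.01 * rng.normal(size=(4, 128))
shoes = rng.normal(size=(4, 128))
assert category_variability(balls) < category_variability(shoes)
```

In practice the feature rows would come from a pretrained DNN's activations for each drawing, and the index would be computed per object category.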

1 month ago

2/8 If so, objects with little variability (balls) should activate narrow templates that enable efficient search. In contrast, highly variable categories (shoes) should activate broader templates, making search less efficient.
But how do we measure variability in templates of real-world objects?

1 month ago

Real-world objects vary a lot in how they look. Some categories like balls or rackets are relatively consistent, while others like shoes or bags come in many forms (stilettos, boots, sneakers). Could this inherent variability affect the precision of target templates as we search for them?

1 month ago

🧨 Preprint alert
Is it easier to find a ball than a shoe? The answer lies in how variable we think these objects are in the real world. www.biorxiv.org/content/10.6...

w/ the amazing @dkaiserlab.bsky.social & @luchunyeh.bsky.social 🦄

🧵 1/8

2 months ago

Now in press at Nature Communications!
www.nature.com/articles/s41...
Check it out if you are interested in category selectivity, the organization of visual cortex, and topographic models!

2 months ago

🚨 New preprints out! 🚨
Excited to share two new preprints from my #MSCA project. With Daniel @dkaiserlab.bsky.social , Marius @peelen.bsky.social , and Belma Seferovic, we show how contextual associations shape real-world object representations and guide everyday visual task performance.
👇👇👇 1/n

3 months ago

I'm excited to share the first preprint from my PhD project!
Together with Daniel Kaiser (@dkaiserlab.bsky.social), we investigated how internal models shape inter-individual differences in the perception and neural processing of natural scenes.
Preprint: osf.io/preprints/ps...
1/n

3 months ago
Top-down and bottom-up neuroscience as collections of practices - Nature Reviews Neuroscience

New Correspondence with @davidpoeppel.bsky.social in Nat Rev Neurosci. www.nature.com/articles/s41...

Here, we critique a recent paper by Rosas et al. We argue that "Bottom-up" and "Top-down" neuroscience have various meanings in the literature.

PDF: rdcu.be/eSKYI

3 months ago

Investigating individual-specific topographic organization has traditionally been a resource-intensive and time-consuming process. But what if we could map visual cortex organization in thousands of brains? Here we offer the community a toolbox that can do just that! tinyurl.com/deepretinotopy

4 months ago

Really interesting work by Bakhurin and colleagues challenging the reward prediction error hypothesis of dopamine:
www.nature.com/articles/s41...
I love this figure which both echoes and undermines the famous figure from Schultz et al. (1997).

5 months ago

How can we characterize the contents of our internal models of the world? We highlight participant-driven approaches, from drawings to descriptions, to study how we expect scenes to look! 🤩

8 months ago

Go Lu! 💃💜

8 months ago

How is high-level visual cortex organized?

In a new preprint with @martinhebart.bsky.social & @kathadobs.bsky.social, we show that category-selective areas encode a rich, multidimensional feature space 🌈

www.biorxiv.org/content/10.1...
#neuroskyence

🧵 1/n

9 months ago

#VSS2025 was a blast - great science, fun people, beach vibes. We'll be back! @vssmtg.bsky.social

1 year ago

We understand very little! There's the data gap, but also very little is done about the gaps we already know exist.
