
Hannah Small

@hsmall.bsky.social

5th-year PhD student in Cognitive Science at Johns Hopkins, working with Leyla Isik. https://www.hannah-small.com/

95 Followers  |  121 Following  |  9 Posts  |  Joined: 25.09.2023

Latest posts by hsmall.bsky.social on Bluesky

Simple 3D Pose Features Support Human and Machine Social Scene Understanding
Humans can quickly and effortlessly extract a variety of information about others' social interactions from visual input, ranging from visuospatial cues like whether two people are facing each other t...

Why do AI models struggle with social scenes? 🧐 Our new preprint with @lisik.bsky.social reveals a fundamental gap: most AI vision models lack explicit 3D pose information that humans rely on for social judgments.

Read the full work: arxiv.org/abs/2511.03988

10.11.2025 00:38 — 👍 21    🔁 3    💬 1    📌 0

Excited to share our work on mechanisms of naturalistic audiovisual processing in the human brain 🧠🎬!!
www.biorxiv.org/content/10.1...

07.11.2025 16:01 — 👍 6    🔁 5    💬 9    📌 2
Call for applications to cognitive science PhD program with QR code to the link above

The department of Cognitive Science @jhu.edu is seeking motivated students interested in joining our interdisciplinary PhD program! Applications due 1 Dec

Our PhD students also run an application mentoring program for prospective students. Mentoring requests due November 15.

tinyurl.com/2nrn4jf9

30.10.2025 19:09 — 👍 12    🔁 9    💬 0    📌 2
Post image

🚨 New preprint w/ @lisik.bsky.social!
Aligning Video Models with Human Social Judgments via Behavior-Guided Fine-Tuning

We introduce a ~49k-triplet social video dataset, uncover a modality gap (language > video), and close it via novel behavior-guided fine-tuning.
🔗 arxiv.org/abs/2510.01502

03.10.2025 13:48 — 👍 22    🔁 6    💬 1    📌 1
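The "behavior-guided fine-tuning" above presumably pulls video embeddings toward human similarity judgments with a triplet-style objective. As a generic illustration only (the paper's actual loss, margin, and embeddings may differ; all data below are synthetic), a margin loss over cosine distances for one (anchor, positive, negative) triplet might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin loss on cosine distances for one (anchor, positive, negative) triplet."""
    def cos_dist(u, v):
        return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(0.0, cos_dist(anchor, positive) - cos_dist(anchor, negative) + margin)

dim = 8
a = rng.standard_normal(dim)
p = a + 0.05 * rng.standard_normal(dim)   # near-duplicate: judged similar by humans
n = rng.standard_normal(dim)              # unrelated: judged dissimilar

loss_easy = triplet_loss(a, p, n)   # positive already closer -> small/zero loss
loss_hard = triplet_loss(a, n, p)   # embedding geometry disagrees with the judgment
print(loss_easy < loss_hard)  # True
```

Only triplets where the model's embedding geometry disagrees with the human judgment produce a large loss, and thus a training signal during fine-tuning.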
Laboratory Technician About the Opportunity SUMMARY The Subjectivity Lab, directed by Jorge Morales, and housed in the Department of Psychology at Northeastern University is excited to invite applications for a full-time L...

🚨🚨🚨 The Subjectivity Lab is looking for a lab manager! The position is available immediately. We want someone who can help coordinate our large-sample fMRI study, plus other behavioral work. Because *gestures at everything* the job was approved only now (ends in June 2026). Great opportunity! 🧵 1/4

29.09.2025 14:22 — 👍 22    🔁 29    💬 2    📌 1
Post image

My lab at USC is recruiting!
1) research coordinator: perfect for a recent graduate looking for research experience before applying to PhD programs: usccareers.usc.edu REQ20167829
2) PhD students: see FAQs on lab website dornsife.usc.edu/hklab/faq/

28.09.2025 21:46 — 👍 40    🔁 25    💬 1    📌 1
GitHub - Isik-lab/ubiquitous-vis: Code for paper 'Ubiquitous cortical sensitivity to visual information during naturalistic, audiovisual movie viewing'

These findings highlight the importance of visual-semantic signals, above and beyond spoken language content, across cortex, even in the language network.
The code to replicate the analyses and figures is available here: github.com/Isik-lab/ubi...
8/8

24.09.2025 19:52 — 👍 1    🔁 0    💬 0    📌 0

Follow-up analyses showed that both social perception and language regions were best predicted by later vision model layers, which map onto high-level social semantic signals (valence, the presence of a social interaction, faces).
7/n

24.09.2025 19:51 — 👍 1    🔁 0    💬 1    📌 0

Importantly, vision and language embeddings are only weakly correlated throughout the movie, suggesting that each predicts distinct variance in the neural responses.
6/n

24.09.2025 19:51 — 👍 0    🔁 0    💬 1    📌 0
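The "distinct variance" logic in this post can be illustrated with a toy regression (synthetic data, not the paper's analysis): when two predictors are nearly uncorrelated and the response depends on both, a joint encoding model explains more variance than either predictor alone.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 500
vis = rng.standard_normal(n)    # stand-in "vision embedding" dimension
lang = rng.standard_normal(n)   # stand-in "language embedding" dimension (independent)
y = vis + lang + 0.5 * rng.standard_normal(n)   # simulated response driven by both

def r2(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_vis = r2(vis[:, None], y)
r2_lang = r2(lang[:, None], y)
r2_joint = r2(np.column_stack([vis, lang]), y)
print(r2_joint > max(r2_vis, r2_lang))  # True: each predicts distinct variance
```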
Post image

We find that vision embeddings dominate prediction across cortex. Surprisingly, even language-selective regions were predicted by vision model embeddings as well as, or better than, by language model features.
5/n

24.09.2025 19:51 — 👍 0    🔁 0    💬 1    📌 0
Post image

We densely labeled the vision and language features of the movie using a combination of human annotations and vision and language deep neural network (DNN) models, and linearly mapped these features to fMRI responses using an encoding model.
4/n

24.09.2025 19:49 — 👍 0    🔁 0    💬 1    📌 0
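The encoding-model step described here (linearly mapping stimulus features to fMRI responses) is, at its core, regularized linear regression evaluated on held-out data. A minimal sketch with synthetic data and closed-form ridge regression (the paper's actual features, regularization, and cross-validation scheme are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

n_trs, n_features, n_voxels = 200, 10, 50
X = rng.standard_normal((n_trs, n_features))          # feature time series (e.g. DNN-derived)
W_true = rng.standard_normal((n_features, n_voxels))  # hypothetical true feature-to-voxel mapping
Y = X @ W_true + 0.1 * rng.standard_normal((n_trs, n_voxels))  # simulated voxel responses

# Split into train/test halves (real analyses would cross-validate).
X_tr, X_te, Y_tr, Y_te = X[:100], X[100:], Y[:100], Y[100:]

# Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_features), X_tr.T @ Y_tr)

# Standard encoding-model score: per-voxel correlation between
# predicted and held-out responses.
Y_hat = X_te @ W
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)])
print(round(float(r.mean()), 2))
```

Per-voxel prediction accuracy on held-out data is the usual measure of how well a feature space explains a region's responses.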

To address this, we collected fMRI data from 34 participants while they watched a 45-minute naturalistic audiovisual movie. Critically, we used functional localizer experiments to identify social interaction perception and language-selective regions in the same participants.
3/n

24.09.2025 19:47 — 👍 0    🔁 0    💬 1    📌 0

Humans effortlessly extract social information from both the vision and language signals around us. However, most work (even most naturalistic fMRI encoding work) is limited to studying unimodal processing. How does the brain process simultaneous multimodal social signals?
2/n

24.09.2025 19:46 — 👍 0    🔁 0    💬 1    📌 0
Post image

Excited to share new work with @hleemasson.bsky.social, Ericka Wodka, Stewart Mostofsky, and @lisik.bsky.social! We investigated how simultaneous vision and language signals are combined in the brain using naturalistic + controlled fMRI. Read the paper here: osf.io/b5p4n
1/n

24.09.2025 19:46 — 👍 48    🔁 11    💬 1    📌 2
Post image

What shapes the topography of high-level visual cortex?

Excited to share a new pre-print addressing this question with connectivity-constrained interactive topographic networks, titled "Retinotopic scaffolding of high-level vision", w/ Marlene Behrmann & David Plaut.

🧵 ↓ 1/n

16.06.2025 15:11 — 👍 67    🔁 24    💬 1    📌 0

Despite everything going on, I may have funds to hire a postdoc this year 😬🤞🧑‍🔬 Open to a wide variety of possible projects in social and cognitive neuroscience. Get in touch if you are interested! Reposts appreciated.

09.05.2025 19:01 — 👍 131    🔁 103    💬 3    📌 5
Post image

📢 Excited to announce our paper at #ICLR2025: "Modeling dynamic social vision highlights gaps between deep learning and humans"! w/ @emaliemcmahon.bsky.social, Colin Conwell, Mick Bonner, @lisik.bsky.social

📆 Thu, Apr 24: 3:00-5:30 - Poster session 2 (#64)
📄 bit.ly/4jISKES%E2%8... [1/6]

23.04.2025 18:07 — 👍 9    🔁 3    💬 1    📌 0
Shown is an example image that participants viewed either in EEG, fMRI, and a behavioral annotation task. There is also a schematic of a regression procedure for jointly predicting fMRI responses from stimulus features and EEG activity.

I am excited to share our recent preprint and the last paper of my PhD! Here, @imelizabeth.bsky.social, @lisik.bsky.social, Mick Bonner, and I investigate the spatiotemporal hierarchy of social interactions in the lateral visual stream using EEG-fMRI.

osf.io/preprints/ps...

#CogSci #EEG

23.04.2025 15:34 — 👍 27    🔁 9    💬 1    📌 0

This is incredibly cool: if you search for a condition that's affected your family, the site returns stats on how much NIH has done for that disease, *and* a contact form for reaching out to tell your Members of Congress why you want to see them defend NIH.

Pass it on!

21.04.2025 13:06 — 👍 619    🔁 407    💬 4    📌 3
The cerebellar components of the human language network
The cerebellum's capacity for neural computation is arguably unmatched. Yet despite evidence of cerebellar contributions to cognition, including language, its precise role remains debated. Here, we sy...

New paper! 🧠 **The cerebellar components of the human language network**

with: @hsmall.bsky.social @moshepoliak.bsky.social @gretatuckute.bsky.social @benlipkin.bsky.social @awolna.bsky.social @aniladmello.bsky.social and @evfedorenko.bsky.social

www.biorxiv.org/content/10.1...

1/n 🧵

21.04.2025 15:19 — 👍 50    🔁 20    💬 2    📌 3
Technical Associate I, Kanwisher Lab - MIT, Cambridge MA 02139

I'm hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...

26.03.2025 15:09 — 👍 64    🔁 48    💬 5    📌 3
Rescinded NIH & NSF Grants

Substantial updates to the list of cancelled grants 👇

- THANK YOU to all who have contributed. Crowdsourcing restores faith in humanity.

- It's still a work in progress. You'll see more updates shortly.

- There are multiple teams & efforts engaged in tracking & advocacy. More to come soon!

18.03.2025 04:24 — 👍 255    🔁 157    💬 14    📌 20
Grad Admission Impacts Survey
It is grad admissions season and many postbacs are feeling the chilling impacts of the Trump administration's recent executive orders freezing and slashing extramural research funding. Dozens of gradu...

As a result of Trump's slashes to research funding, dozens of graduate programs have announced reductions and cancellations of graduate admissions slots.

If you are an impacted applicant, please fill out this survey: docs.google.com/forms/d/e/1F...

🧪🧠🧬🔬🥼👩🏼‍🔬🧑‍🔬

24.02.2025 03:51 — 👍 263    🔁 270    💬 8    📌 5
EvLab
Our research aims to understand how the language system works and how it fits into the broader landscape of the human mind and brain.

Our language neuroscience lab (evlab.mit.edu) is looking for a new lab manager/FT RA to start in the summer. Apply here: tinyurl.com/3r346k66 We'll start reviewing apps in early Mar. (Unfortunately, MIT does not sponsor visas for these positions, but OPT works.)

05.02.2025 14:43 — 👍 30    🔁 20    💬 0    📌 0
Video thumbnail

Hey Bsky friends on #neuroskyence! Very excited to share our
@iclr-conf.bsky.social paper: TopoNets! High-performing vision and language models with brain-like topography! Expertly led by grad student Mayukh and Mainak! A brief thread...

30.01.2025 15:23 — 👍 56    🔁 18    💬 3    📌 3

✨ I'm hiring a lab manager, with a start date of ~September 2025! To express interest, please complete this Google form: forms.gle/GLyAbuD779Rz...

Looking for someone to join our multi-disciplinary team, using OPM, EEG, iEEG and computational techniques to study speech and language processing! 🧠

13.12.2024 01:13 — 👍 103    🔁 64    💬 2    📌 3
Post image

Our paper "Relational visual representations underlie human social interaction recognition" led by @manasimalik.bsky.social is now out in Nature Communications
www.nature.com/articles/s41...

13.11.2023 15:54 — 👍 30    🔁 13    💬 1    📌 0
Post image

Our paper "Hierarchical organization of social action features in the lateral visual stream" led by @emaliemcmahon.bsky.social with Mick Bonner is now out in @currentbiology.bsky.social

www.sciencedirect.com/science/arti...

01.11.2023 16:55 — 👍 32    🔁 13    💬 0    📌 1

If you are interested in PhD Application Mentoring for the JHU Cog Sci program, fill out the interest form here! forms.gle/aBuBLzSa4Qje...

25.10.2023 21:47 — 👍 6    🔁 1    💬 0    📌 0
Rapid processing of observed touch through social perceptual brain regions: an EEG-fMRI fusion study
Seeing social touch triggers a strong social-affective response that involves multiple brain networks, including visual, social perceptual, and somatosensory systems. Previous studies have identified ...

My recent work with @lisik.bsky.social on social touch is now published in JNeurosci. We show that the brain detects the social-affective significance of observed touch at an early stage, within the time frame of feedforward visual processing through social perceptual brain regions.

24.10.2023 09:25 — 👍 17    🔁 13    💬 0    📌 1
