
Jane Han

@jane-han.bsky.social

Cognitive Neuroscience PhD student @Haxbylab.bsky.social @DartmouthPBS.bsky.social πŸŽ„ πŸ“š βœπŸ»πŸ§ πŸ‘©πŸ»β€πŸ’» she/her πŸ‡°πŸ‡·

96 Followers  |  187 Following  |  10 Posts  |  Joined: 02.12.2024

Latest posts by jane-han.bsky.social on Bluesky



πŸ™Œ Again, another round of applause and huge thanks to the greatest mentor @sam, who kicked off this exciting project with his dissertation. Your guidance was pivotal. I sincerely would not have survived my PhD journey without @samnastase.bsky.social and @haxbylab.bsky.social ...!

19.12.2024 20:45 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

🀯 We were surprised at how well these behavioral-arrangement models capture cortical representational geometry, including in areas like VTβ€”our findings suggest that high-level, behaviorally-relevant features of action understanding occupy a privileged role in cortical representation.

19.12.2024 20:40 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ₯§ Variance partitioning revealed that behavioral models of transitivity and sociality captured a large portion of unique variance throughout the action observation network, extending into ventral temporal cortex.

19.12.2024 20:39 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
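The unique-variance logic behind variance partitioning can be sketched as follows: fit a regression using all predictor RDMs, fit a reduced regression leaving one model out, and take the drop in explained variance. The data below are random stand-ins, and the variable names are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def r_squared(X, y):
    # Ordinary least-squares fit with intercept; proportion of variance explained
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n_pairs = 4005  # 90 videos -> 90 * 89 / 2 pairwise dissimilarities

# Hypothetical predictor RDM vectors; the "neural" RDM is built mostly
# from the first predictor so it has genuine unique variance to find
transitivity = rng.random(n_pairs)
sociality = rng.random(n_pairs)
neural = 0.8 * transitivity + 0.2 * rng.random(n_pairs)

full = r_squared(np.column_stack([transitivity, sociality]), neural)
reduced = r_squared(sociality[:, None], neural)
unique_transitivity = full - reduced  # variance only transitivity explains
```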

πŸ’ƒ We found that, out of nine models, the behavioral models capturing the meaning of the actions depicted in the stimuliβ€”the transitivity and sociality modelsβ€”best captured neural representational geometry throughout much of the action observation network.

19.12.2024 20:39 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

🧠 We tested all nine of these models against neural representational geometries (with hybrid hyperalignment based on a separate movie stimulus!) using both a searchlight analysis and in regions of interest.

19.12.2024 20:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
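The model-comparison step described above can be sketched as a rank correlation between each candidate model RDM and a neural RDM (a standard RSA approach). Everything below is illustrative stand-in data; the two toy model names are assumptions, not the paper's actual results:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions = 90  # 90 action videos, as in the study
n_pairs = n_conditions * (n_conditions - 1) // 2

# Stand-in neural RDM vector (upper triangle of a 90 x 90 dissimilarity matrix)
neural_rdm_vec = rng.random(n_pairs)

# Two toy candidate models: one tracking the neural geometry, one unrelated
model_rdms = {
    "transitivity": neural_rdm_vec + 0.1 * rng.random(n_pairs),
    "motion_energy": rng.random(n_pairs),
}

# RSA model comparison: Spearman rank correlation between RDM vectors
scores = {name: spearmanr(vec, neural_rdm_vec)[0]
          for name, vec in model_rdms.items()}
best_model = max(scores, key=scores.get)
```

In a searchlight analysis this comparison is simply repeated at every cortical location, each time with that location's local neural RDM.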

🎞️ Finally, we constructed semantic RDMs from word embeddings based on verbs and nonverbs in an annotation of the stimulus, a gaze RDM from a separate eye-tracking sample, and a visual motion energy RDM. This amounted to 3+ hours of fMRI data and 5+ hours of behavioral data per participant (N = 23)!

19.12.2024 20:37 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
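A semantic RDM of the kind described can be sketched as cosine distances between per-video embedding vectors. In practice each video's vector would come from the word embeddings of its annotated verbs (or non-verbs); here the embeddings are random stand-ins, and the 300-dimensional size is an assumption:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
n_videos, dim = 90, 300  # 90 videos; embedding dimensionality is an assumption

# Hypothetical per-video embeddings: in practice, average the word vectors
# for the words annotated for each video clip
video_embeddings = rng.standard_normal((n_videos, dim))

# Cosine distance between videos' embeddings yields the semantic RDM vector
semantic_rdm_vec = pdist(video_embeddings, metric="cosine")
```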

πŸ–ΌοΈ We also included three other behavioral arrangement tasks where participants organized static images from the video stimuli according to their visual content: scene, person, and object features.

19.12.2024 20:37 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

πŸ‘† To capture behaviorally-relevant action features, we had participants perform two behavioral arrangement tasks where they organized the action videos according to their object-/goal-related features (transitivity) or their social features (sociality).

19.12.2024 20:36 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
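One standard way to turn an arrangement task into a behavioral RDM is to take pairwise distances between where participants placed the items on screen; the thread doesn't say whether the paper uses plain distances or a more elaborate multi-arrangement scheme, so this is a minimal sketch with hypothetical coordinates:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
n_videos = 90

# Hypothetical 2D screen coordinates from one participant's arrangement:
# similar actions are dragged close together, dissimilar ones far apart
positions = rng.random((n_videos, 2))

# Pairwise Euclidean distances between arranged items form the behavioral RDM
rdm_vec = pdist(positions, metric="euclidean")
rdm = squareform(rdm_vec)  # 90 x 90 symmetric dissimilarity matrix
```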

πŸŽ₯ We developed a condition-rich fMRI design with 90 real-world action videos spanning a variety of social and nonsocial action categories. What are the organizing features of observed action representation across cortex? We built several different kinds of models to find out…

19.12.2024 20:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Behaviorally-relevant features of observed actions dominate cortical representational geometry in natural vision, now available on bioRxiv!

🚨 New paper out with @samnastase.bsky.social and @haxbylab.bsky.social! We use representational similarity analysis to test how well behavioral, semantic, and visual models capture cortical representational geometries when viewing naturalistic action videos: doi.org/10.1101/2024...

19.12.2024 20:34 β€” πŸ‘ 30    πŸ” 11    πŸ’¬ 1    πŸ“Œ 2
