
Dartmouth Psychological and Brain Sciences

@dartmouthpbs.bsky.social

Official account of the Department of Psychological and Brain Sciences at Dartmouth College. Follow for research, learning resources, events, news, and job postings.

315 Followers  |  39 Following  |  13 Posts  |  Joined: 13.11.2024

Latest posts by dartmouthpbs.bsky.social on Bluesky

Fig. 1. a. Visual and auditory regions of interest (ROIs). b. Responses in a combination of visual (e.g., early dorsal visual stream; Fig. 1a, middle panel) and auditory regions were used to predict responses in the rest of the brain using MVPN. c. In order to identify brain regions that combine responses from auditory and visual regions, we identified voxels where predictions generated using the combined patterns from auditory regions and one set of visual regions jointly (as shown in Fig. 1b) are significantly more accurate than predictions generated using only auditory regions or only that set of visual regions.

I'm excited to share my first first-author paper, “Distinct portions of superior temporal sulcus combine auditory representations with different visual streams” (with @mtfang.bsky.social and @steanze.bsky.social), now out in The Journal of Neuroscience!
www.jneurosci.org/content/earl...
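
A minimal sketch of the Fig. 1c logic, assuming ridge-regression encoding models on toy data; the simulated responses and variable names below are illustrative, not the paper's actual MVPN pipeline:

    # Predict held-out "target" voxel responses from auditory features alone,
    # visual features alone, and both together; voxels where the joint model
    # beats both single-modality models would be flagged as candidate
    # audiovisual convergence zones. Toy data only.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_time, n_aud, n_vis, n_vox = 300, 20, 30, 50
    A = rng.standard_normal((n_time, n_aud))  # auditory ROI response patterns
    V = rng.standard_normal((n_time, n_vis))  # one visual stream's patterns
    Y = (A @ rng.standard_normal((n_aud, n_vox))   # targets mixing both inputs
         + V @ rng.standard_normal((n_vis, n_vox))
         + rng.standard_normal((n_time, n_vox)))   # plus noise

    def accuracy(X):
        """Mean predicted-vs-actual correlation across held-out target voxels."""
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.5, random_state=0)
        Y_hat = Ridge(alpha=1.0).fit(X_tr, Y_tr).predict(X_te)
        return np.mean([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)])

    print(f"A only: {accuracy(A):.2f}  V only: {accuracy(V):.2f}  "
          f"A+V: {accuracy(np.hstack([A, V])):.2f}")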

02.10.2025 15:20 — 👍 19    🔁 10    💬 1    📌 0

Excited to share the preprint for my first first-author manuscript! @markthornton.bsky.social and I show that people hold robust, structured beliefs about how individual mental states unfold in intensity over time. We find that these beliefs are reflected in other domains of mental state understanding.

16.09.2025 14:46 — 👍 33    🔁 6    💬 2    📌 1

Very excited to share @landrybulls.bsky.social's first lead-author preprint in my lab! Using datasets from MySocialBrain.org we measured people's beliefs about how mental states change in intensity over time, the dimensional structure of those beliefs, and their correlates: osf.io/preprints/ps... 🧵👇

16.09.2025 15:08 — 👍 21    🔁 4    💬 0    📌 0
Effect of confound mass on true positive rates under FDR correction. Confound mass represents how large a confound is in terms of the product of its voxel extent and effect size. Results are shown at differing combinations of true effect size, true effect voxel extent, and sample size.

Inflated surface maps of meta-analytic z-statistics from Neurosynth for low-level confounds (top) and high-level cognitive tasks (bottom). Red reflects positive activations, blue reflects negative (de)activations, and darker colors indicate larger z-statistics. Maps are thresholded at |z| = 1 for visualization purposes.

Effect of confound effect size on true positive rates for task effects under FDR correction. Colors indicate sample sizes: N = 25 in blue, N = 50 in green, and N = 100 in orange. Effect sizes are reflected by the darkness of each color, with light shades representing d = .2, medium d = .5, and dark d = .8. The task brain maps and confound brain maps referenced in each panel are shown in Figure 3.

Effect of FDR-based publication bias on observed confound effect sizes. Simulated meta-analytic confound effect sizes are visualized through violin plots for each combination of task effect and confound effect examined in the neural data simulations. Meta-analyses featuring publication bias (orange) substantially inflate these effect size estimates in all cases, relative to meta-analyses featuring no publication bias (blue).

After 5 years, I finally carved out time to turn this blog post on FDR (markallenthornton.com/blog/fdr-pro...) into a manuscript. The preprint features a much broader range of simulations showing how FDR promotes confounds, and how this effect compounds with publication bias: osf.io/preprints/ps...
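
For the intuition behind the figures above, here is a toy version of the core mechanism, assuming one-sample t-tests across voxels and Benjamini-Hochberg FDR via statsmodels; the voxel counts and effect sizes are illustrative, not the preprint's simulation grid:

    # A widespread confound floods the analysis with small p-values, which
    # relaxes the adaptive Benjamini-Hochberg threshold and inflates the true
    # positive rate for a weak task effect. Illustrative numbers only.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(1)
    n_subj = 50
    n_task, n_conf, n_null = 200, 2000, 20000  # voxel counts
    d_task, d_conf = 0.3, 0.5                  # Cohen's d for task and confound

    def task_tpr(confound_present):
        means = np.concatenate([
            np.full(n_task, d_task),                               # true task voxels
            np.full(n_conf, d_conf if confound_present else 0.0),  # confound voxels
            np.zeros(n_null),                                      # null voxels
        ])
        data = rng.standard_normal((n_subj, means.size)) + means
        _, p = stats.ttest_1samp(data, 0.0)                 # one test per voxel
        reject, *_ = multipletests(p, alpha=0.05, method="fdr_bh")
        return reject[:n_task].mean()                       # hits among task voxels

    print(f"task TPR, no confound:   {task_tpr(False):.2f}")
    print(f"task TPR, with confound: {task_tpr(True):.2f}")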

29.08.2025 15:43 — 👍 48    🔁 15    💬 3    📌 1

New paper from me at Cognition and Emotion! In "Deep neural network models of emotion understanding," I discuss how deep nets can be used as cognitive models of emotion perception, prediction, and regulation: doi.org/10.1080/0269...

(h/t @ltjaql.bsky.social for the illustrations!)

07.08.2025 15:20 — 👍 33    🔁 10    💬 1    📌 0
Ventral Striatal Dopamine Increases following Hippocampal Sharp-Wave Ripples
Leading theories suggest that hippocampal replay drives offline learning through coupling with an internal teaching signal such as ventral striatal dopamine (DA); however, the relationship between hip...

Hung-tu Chen, Nicolas Tritsch, Matt van der Meer, and I have submitted a new preprint (doi.org/10.1101/2025...) in which we use simultaneous hippocampal ephys and ventral striatal (VS) fiber photometry to establish a link between sharp-wave ripples (SWRs) and VS dopamine (DA) in mice. (1/9)

04.08.2025 18:30 — 👍 19    🔁 5    💬 1    📌 1
Clustrix Documentation

I'm starting to work on a new library, "clustrix" (clustrix.readthedocs.io/en/latest/), to ease switching between local vs. remote execution in Python scripts, notebooks, etc. This has been a pain point for my group for a while!
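
A hypothetical usage sketch of that local-vs-remote idea; the configure() call and @cluster decorator below are assumptions for illustration, not confirmed API, so check clustrix.readthedocs.io for the real interface:

    # Hypothetical sketch only: the function names and parameters here are
    # assumed, not taken from the clustrix docs.
    import clustrix

    # Assumed one-time setup pointing at a remote cluster; without it, the
    # decorated function would presumably just run locally.
    clustrix.configure(cluster_host="login.example.edu", username="me")

    @clustrix.cluster(cores=8)  # assumed decorator marking a remotable function
    def big_simulation(n):
        import numpy as np
        return float(np.random.default_rng(0).standard_normal(n).mean())

    result = big_simulation(1_000_000)  # same call either way, local or remote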

30.06.2025 04:47 — 👍 10    🔁 2    💬 1    📌 0
Jason and I wearing Chicago Booth swag, inadvertently looking like new MBA students. We are smiling and celebrating at a restaurant!

Next summer I will start as an Assistant Professor of Behavioral Science at the University of Chicago Booth School of Business. I couldn't be more excited! 1/

30.06.2025 21:52 — 👍 85    🔁 6    💬 13    📌 3
DataWrangler — datawrangler 0.4.0 documentation

🤠 New release announcement for our datawrangler package! Try it using:

pip install --upgrade pydata-wrangler

Lots of awesome performance improvements (including native polars support!), simplified API, support for @hf.co text embeddings, etc. More info here: data-wrangler.readthedocs.org
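
A minimal sketch of the package's core idea, funneling heterogeneous inputs through a single entry point into pandas DataFrames; wrangle is the package's central function, but the default text-embedding behavior shown here is an assumption, so see the docs for configuration details:

    # One entry point that wrangles different data types into DataFrames.
    # The text-embedding default is an assumption; consult the docs to
    # configure which model handles text.
    import numpy as np
    import datawrangler as dw

    array_df = dw.wrangle(np.random.randn(10, 3))          # array -> DataFrame
    text_df = dw.wrangle(["a short document", "another"])  # text -> embeddings
    print(array_df.shape, text_df.shape)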

14.06.2025 12:10 — 👍 4    🔁 3    💬 0    📌 0

well it's officially official - I graduated yesterday! 🎓 super honored to have also received my department's promise award in brain science research. pumped to start my postdoc next month!!

15.06.2025 15:08 — 👍 77    🔁 5    💬 7    📌 0
Title: Representations of what's possible reflect others' epistemic states

Authors: Lara Kirfel, Matthew Mandelkern, and Jonathan Scott Phillips

Abstract: People's judgments about what an agent can do are shaped by various constraints, including probability, morality, and normality. However, little is known about how these representations of possible actions—what we call modal space representations—are influenced by an agent's knowledge of their environment. Across two studies, we investigated whether epistemic constraints systematically shift modal space representations and whether these shifts affect high-level force judgments. Study 1 replicated prior findings that the first actions that come to mind are perceived as the most probable, moral, and normal, and demonstrated that these constraints apply regardless of an agent's epistemic state. Study 2 showed that limiting an agent's knowledge changes which actions people perceive to be available for the agent, which in turn affects whether people judge an agent as being “forced” to take a particular action. These findings highlight the role of Theory of Mind in modal cognition, revealing how epistemic constraints shape perceptions of possibilities.


🏔️ Brad is lost in the wilderness—but doesn't know there's a town nearby. Was he forced to stay put?

In our #CogSci2025 paper, we show that judgments of what's possible—and whether someone had to act—depend on what agents know.

📰 osf.io/preprints/ps...

w/ Matt Mandelkern & @jsphillips.bsky.social

16.05.2025 12:04 — 👍 10    🔁 3    💬 0    📌 0

Excited to see everyone at #VSS2025! Come check out what my lab has been up to this past year:

16.05.2025 16:04 — 👍 28    🔁 3    💬 0    📌 0

Despite everything going on, I may have funds to hire a postdoc this year 😬🤞🧑‍🔬 Open to a wide variety of possible projects in social and cognitive neuroscience. Get in touch if you are interested! Reposts appreciated.

09.05.2025 19:01 — 👍 131    🔁 103    💬 3    📌 5

New preprint! Thrilled to share my latest work with @esfinn.bsky.social -- "Sensory context as a universal principle of language in humans and LLMs"

osf.io/preprints/ps...

05.05.2025 14:49 — 👍 51    🔁 19    💬 3    📌 3
Real-world objects scaffold visual working memory for features: Increased neural delay activity when colors are remembered as part of meaningful objects
Visual working memory is a core cognitive function that allows active storage of task-relevant visual information. While previous studies have postulated that the capacity of this system is fixed with...

New Preprint with @timbrady.bsky.social and @violastoermer.bsky.social : www.biorxiv.org/content/10.1... Here we show increased neural delay activity associated with remembering features as part of real-world objects. 1/

30.04.2025 14:39 — 👍 21    🔁 6    💬 2    📌 0

More Dartmouth PBS at #SANS2025!

26.04.2025 22:29 — 👍 8    🔁 1    💬 0    📌 0

Our third speaker @zizhuangmiao.bsky.social discusses the modality-general nature of the neural correlates of social interaction and their overlap with theory of mind

26.04.2025 18:31 — 👍 3    🔁 1    💬 0    📌 1

SCRAP Lab had a great time at #SANS2025! Can't wait till next year!

26.04.2025 22:27 — 👍 38    🔁 5    💬 0    📌 0

More Dartmouth PBS at #SANS2025!

26.04.2025 19:40 — 👍 20    🔁 2    💬 0    📌 1

More Dartmouth PBS at #SANS2025!

26.04.2025 19:39 — 👍 4    🔁 1    💬 1    📌 0

Congratulations to Dartmouth PBS's @markthornton.bsky.social for winning the @sansmeeting.bsky.social Early Career Award! 👏👏👏

26.04.2025 18:17 — 👍 7    🔁 1    💬 0    📌 0

The home stretch of #SANS2025 kicks off with the early career award talk by @markthornton.bsky.social. Congrats, Mark!!

26.04.2025 17:53 — 👍 17    🔁 3    💬 0    📌 0

SANS president @ajaysatpute.bsky.social introducing our #SANS2025 early career award winner @markthornton.bsky.social!

26.04.2025 17:51 — 👍 13    🔁 3    💬 3    📌 1

Building on the early foundation identifying "where" to answer questions about "how"

Congratulations again @markthornton.bsky.social

26.04.2025 18:10 — 👍 6    🔁 3    💬 0    📌 0

Mark shares his work on impression updating in dynamic, naturalistic settings using cutting-edge computational tools:

26.04.2025 18:03 — 👍 7    🔁 3    💬 0    📌 1

Awesome multimodal work from @markthornton.bsky.social showing how information from face, voice, and the substance of conversations combines to shape person-perception judgements #SANS2025

26.04.2025 18:10 — 👍 12    🔁 5    💬 0    📌 0

Mark Thornton @markthornton.bsky.social responds:

AI models as biomarkers, annotations and cognitive models; and as aids in coding and writing

25.04.2025 17:30 — 👍 7    🔁 2    💬 1    📌 0

AI in social & affective neuroscience: Caution or acceleration? 🤖

Mohammad Atari @mohammadatari.bsky.social 🆚 Mark Thornton @markthornton.bsky.social

Can AI be our Co-Pilot - or should we slow down?
LaSalle Ballroom, don't miss it!
#SANS2025

25.04.2025 16:36 — 👍 17    🔁 5    💬 1    📌 1

In a new paper, we demonstrate the perception of possibilities and show that the processes underlying this phenomenon occur before the information reaches high-level cognition: the representation of these possibilities is distinctly perceptual(!) and separate from cognition. osf.io/preprints/ps...

24.04.2025 15:22 — 👍 30    🔁 8    💬 1    📌 0
a group of men and women standing next to each other on a stage, dancing

We're super excited to announce that we've officially convinced @cgonciulea.bsky.social to join our rag-tag (but VERY classy) team of science nerds this fall as a @dartmouthpbs.bsky.social PhD student 🎉🥳🤓🧠🧑‍🔬🎓!!

25.04.2025 03:28 — 👍 16    🔁 3    💬 1    📌 1
