
Mark Thornton

@markthornton.bsky.social

Social neuroscientist studying how people understand and predict each other. Assistant Professor at Dartmouth College. http://markallenthornton.com

2,131 Followers  |  722 Following  |  217 Posts  |  Joined: 13.07.2023

Posts by Mark Thornton (@markthornton.bsky.social)

Post image

Excited to share that our lab will be presenting multiple projects at SPSP 2026!

If you’re interested in social perception, race talk, intergroup dynamics, or collective action — come check us out!

#SPSP2026 #SocialPsychology #PersonalityPsychology #AcademicResearch #RaceTalk

25.02.2026 19:34 — 👍 19    🔁 7    💬 2    📌 0
Post image

Excited to share new work on how the brain makes social inferences from visual input! 🧠👯‍♂️
(With @lisik.bsky.social, @shariliu.bsky.social, @tianminshu.bsky.social, and Minjae Kim!) www.biorxiv.org/content/10.6...

26.02.2026 22:09 — 👍 42    🔁 16    💬 1    📌 2
Post image

We're excited about the upcoming Computational Psychology preconference at @spspnews.bsky.social this Thursday. See our action-packed full-day agenda below! Featuring 3 keynote talk themes with related early-career speakers, a data blitz session, and a panel discussion. Don't miss it! #SPSP

24.02.2026 18:08 — 👍 21    🔁 9    💬 1    📌 1
Congratulations to the 2026 APS Spence Award recipients: Dorsa Amir, William Brady, Emily Finn, Daniel Yon, Yuan Chang Leong, Andrew Grotzinger.

Post image

Congratulations to the 2026 APS Spence Award Recipients! @dorsaamir.bsky.social, @williambrady.bsky.social, @esfinn.bsky.social, @andrewgrotzinger.bsky.social, @ycleong.bsky.social, @danieljamesyon.bsky.social,

www.psychologicalscience.org/members/awar...

23.02.2026 14:33 — 👍 59    🔁 8    💬 0    📌 11

I will be hiring a full-time pre-doctoral Research Professional to work with me at Chicago Booth.

Know someone interested in studying conversation and connection? Please help spread the word!

More details, including application instructions, are here: www.chicagobooth.edu/-/media/facu...

13.02.2026 17:51 — 👍 35    🔁 29    💬 0    📌 3

Check out the preprint here: osf.io/preprints/ps...

Data and code here: osf.io/8bqgw/ and here: openneuro.org/datasets/ds0...

11.02.2026 20:26 — 👍 1    🔁 0    💬 0    📌 0

This work also demonstrates how emerging computational tools - like the deep neural network we used to manipulate these stimuli in a way that none of the participants realized - can help us overcome traditional tradeoffs in psychological and neuroscientific research.

11.02.2026 20:26 — 👍 3    🔁 0    💬 1    📌 0

This research further underscores the importance of understanding others' minds in shaping our broader understanding of the world and the people within it.

11.02.2026 20:26 — 👍 1    🔁 0    💬 1    📌 0

These findings clarify the roles of these regions in social cognition, indicating among other things that the STS is not merely involved in mental state inference from multimodal cues, but also in the later use of that information in support of downstream cognitive processes.

11.02.2026 20:26 — 👍 1    🔁 0    💬 1    📌 0

We find that both of these processes are supported by portions of lateral temporal cortex, extending from the posterior to anterior superior temporal sulcus (STS), as well as an anterior portion of lateral prefrontal cortex.

11.02.2026 20:26 — 👍 2    🔁 0    💬 1    📌 0

Importantly, this design allows us to dissociate the normal process of inferring others' mental states (e.g., from a combination of noisy cues) from the process of *using* knowledge of others' mental states to predict events or change our minds about others.

11.02.2026 20:26 — 👍 1    🔁 0    💬 1    📌 0

By randomly manipulating access to characters' thoughts via inner monologue narration in naturalistic stimuli, we take an important step toward overcoming this tradeoff and understanding how mental state knowledge shapes narrative processing and trait impression updating.

11.02.2026 20:26 — 👍 1    🔁 0    💬 1    📌 0

Prior studies on these topics have faced a tradeoff between internal and external validity, either using artificial paradigms with clean manipulations to justify stronger causal inference, or using naturalistic paradigms for better generalization to real-world contexts.

11.02.2026 20:26 — 👍 1    🔁 0    💬 1    📌 0

New preprint from Lindsey Tepfer (@ltjaql.bsky.social) and me! We silenced portions of internal monologues in two films to manipulate participants' access to characters' thoughts. Using ISC and RSA, we found that this aligned later neural processing of the narrative & encoding of trait impressions.

11.02.2026 20:26 — 👍 48    🔁 16    💬 2    📌 1
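The ISC mentioned above is intersubject correlation: how similarly different participants' brains respond to the same naturalistic stimulus over time. A minimal NumPy sketch of the common leave-one-out variant (function name and toy data are illustrative assumptions, not taken from the preprint):

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation (ISC).

    data: array of shape (n_subjects, n_timepoints) holding one
    region's response time series per subject.
    Returns one ISC per subject: the Pearson correlation between
    that subject's time series and the mean of everyone else's.
    """
    n_subjects = data.shape[0]
    iscs = np.empty(n_subjects)
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs[s] = np.corrcoef(data[s], others)[0, 1]
    return iscs

# Toy example: 5 "subjects" tracking a shared signal plus noise.
rng = np.random.default_rng(0)
signal = rng.standard_normal(200)
data = signal + 0.5 * rng.standard_normal((5, 200))
print(leave_one_out_isc(data).round(2))
```

Comparing ISC between conditions (e.g., monologue audible vs. silenced) is one standard way such designs test whether a manipulation aligns neural processing across viewers.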
Post image Post image

Congratulations to the SANS Early-Career Award Winner Elisa Baek (@elisabaek.bsky.social) and Mid-Career Award Winner Shuo Wang!

Celebrate their achievement with us at #SANS2026 in San Diego!

09.02.2026 15:44 — 👍 11    🔁 1    💬 0    📌 0

The Visual Learning Lab is hiring TWO lab coordinators!

Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!) — with flexible summer start dates.

30.01.2026 23:21 — 👍 48    🔁 41    💬 1    📌 0
I’m going to halve my publication output. You should consider slow science, too. If we don’t slow down, the research enterprise is going to crash, argues Adrian Barnett.

Great piece on prioritizing quality over quantity in scientific publication.

For those of us with labs, this necessarily involves shrinking our group size. After I got tenure I started to downsize my lab and have not regretted it one iota. More time for each student & more time to think & write.

20.01.2026 14:21 — 👍 115    🔁 31    💬 5    📌 11

Project Implicit is facing an existential threat. After almost 30 years, 60 million visitors, and hundreds of published papers, funding for our work has disappeared.

We’ve never held a fundraising drive before, but we need your support to keep our site running. Please consider donating! 🙏

08.01.2026 18:10 — 👍 31    🔁 25    💬 0    📌 2

Our new paper out in NHB! We started this back in @ptoncompmemlab.bsky.social's lab when I was a postdoc and Rolando was a grad student, showing that stable fMRI representations of places (learned in Rolando's custom-made VR world) provide the best anchors for later item learning

05.01.2026 19:10 — 👍 40    🔁 14    💬 1    📌 0

New study out in Neuron: doi.org/10.1016/j.ne.... This work led by Zaid Zada uses fMRI hyperscanning of real dyads to show that speaking and listening rely on shared neural systems, and that conversation recruits unique brain processes that aren't observed in passive comprehension.

18.12.2025 22:08 — 👍 15    🔁 5    💬 0    📌 0

Yeah, for now, it's not too hard to stay ahead of what is practical for LLM-powered bots. For example, we wanted to collect narratives in a recent study, and just doing that using audio vs. text boxes was enough to eliminate any obvious automation. Not a long-term defense, but good enough for now.

16.12.2025 05:30 — 👍 2    🔁 0    💬 0    📌 0

Yeah, I mean, I'm sympathetic to the worries. Particularly for surveys/purely linguistic stimuli. But I don't think current LLMs can handle audiovisual stimuli nearly as well. And also, just because it's possible to use an agent to automate surveys, doesn't necessarily mean it's practical/economical.

16.12.2025 05:24 — 👍 0    🔁 0    💬 1    📌 0
Modeling emotion in complex stories: the Stanford Emotional Narratives Dataset Human emotions unfold over time, and more affective computing research has to prioritize capturing this crucial component of real-world affect. Modeling dynamic emotional stimuli requires solving the ...

Participants had to continuously rate the traits of people in the Stanford Emotional Narratives Dataset (arxiv.org/abs/1912.05008) as described in this preprint (osf.io/preprints/ps...). I was apprehensive, but inter-participant reliability was on par with what we got several years ago.

16.12.2025 01:17 — 👍 0    🔁 0    💬 1    📌 0
continuous-rater Continuous-rater is a standalone web app built with Svelte that allows users to provide continuous ratings on any given dimension (e.g., how happy they are feeling or how frustrated they are feeling) ...

I had the opposite experience lately - collecting some online (Cloud Connect) norming data for the first time in a while, and we got results which were as good, if not better, than what we got pre-LLMs. Task wasn't super LLM-able though (a variant of this: ui.adsabs.harvard.edu/abs/2020zndo...).

16.12.2025 01:17 — 👍 1    🔁 0    💬 1    📌 0
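The "inter-participant reliability" check mentioned in this thread is commonly estimated by split-half correlation of the group-average rating time courses. A minimal NumPy sketch under that assumption (the function name, split count, and toy data are mine, not from the posts):

```python
import numpy as np

def split_half_reliability(ratings, n_splits=200, seed=0):
    """Split-half reliability of continuous ratings.

    ratings: array of shape (n_raters, n_timepoints). Repeatedly
    split raters into two random halves, correlate the half-mean
    time courses, and apply the Spearman-Brown correction to
    estimate full-sample reliability. Returns the mean estimate.
    """
    rng = np.random.default_rng(seed)
    n = ratings.shape[0]
    estimates = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        a = ratings[perm[: n // 2]].mean(axis=0)
        b = ratings[perm[n // 2:]].mean(axis=0)
        r = np.corrcoef(a, b)[0, 1]
        estimates.append(2 * r / (1 + r))  # Spearman-Brown step-up
    return float(np.mean(estimates))

# Toy example: 10 raters tracking a shared signal plus noise.
rng = np.random.default_rng(1)
signal = rng.standard_normal(300)
ratings = signal + rng.standard_normal((10, 300))
print(round(split_half_reliability(ratings), 2))
```

Comparing this estimate between a current sample and an older one is one simple way to check whether data quality (e.g., pre- vs. post-LLM participant pools) has changed.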

Abstract

When we empathize with someone going through something, we often draw on our past experiences with the someone and the something. These kinds of experiences ground "thick empathy", a form of empathy that has been largely overlooked in the psychology and neuroscience literature. Consider how a mother, empathizing with her daughter about to give birth, can draw on her own experience of childbirth, and her relationship with her daughter, to deeply grasp what her daughter is going through in a way that others who lack those experiences cannot. I argue that thick empathy deserves more empirical attention because it is associated with well-being and helps us build networks of effective mutual social support. My analysis highlights novel risks and dilemmas posed by "empathy machines" that promise to enhance or even replace human empathy and are becoming increasingly popular as a potential solution to widespread loneliness. Even when empathy machines provide value to individuals, their widespread adoption risks imposing collective emotional and epistemic costs that ultimately make it harder for us to empathize well.

Keywords: empathy, understanding, experience, thick description, ethnography, phenomenal knowledge, interpersonal knowledge, virtual reality, artificial intelligence, chatbots


New preprint: Empathy, Thick and Thin
papers.ssrn.com/sol3/papers....

It is perhaps foolhardy to attempt to say something new about a topic as widely studied as empathy. I tried anyway! 1/

11.12.2025 20:50 — 👍 251    🔁 67    💬 11    📌 11
Theory of Minds: Early Understanding of Interacting Minds The idea that we understand others’ actions in terms of their underlying mental states has shaped decades of developmental research on social cognition. Existing work, however, has primarily focused o...

Officially out! In this review, Aaron Chuey and I discuss how existing work on ToM mostly focused on a single individual’s mental states (e.g., what Sally thinks). Extending ToM, we argue for ToMS — an understanding of how multiple individuals communicate and influence each other’s minds. t.ly/u4rtb

10.12.2025 23:30 — 👍 45    🔁 16    💬 0    📌 0

SCRAP yard is just waiting there for me 😂

26.11.2025 21:50 — 👍 5    🔁 0    💬 1    📌 0
How physical information is used to make sense of the psychological world - Nature Reviews Psychology Reasoning about minds and reasoning about physical objects are governed by two distinct systems. In this Perspective, Liu et al. review research from developmental psychology and cognitive neuroscienc...

New perspective paper (w/ @sedaakbiyik.bsky.social, Joseph Outa, & @minjaek.bsky.social) in @natrevpsychol.nature.com ⚽💭🧠👶 : www.nature.com/articles/s44...

24.11.2025 23:14 — 👍 58    🔁 25    💬 1    📌 0
Towards an informational account of interpersonal coordination - Nature Reviews Neuroscience Methodological shortcomings have constrained studies describing the complex dynamics of interpersonal coordination, which is essential to human sociality. In this Perspective, Chidichimo et al. advanc...

Not sure if you shared the actual paper link in your post? www.nature.com/articles/s41... My lab shared it on slack yesterday - we loved it!

19.11.2025 23:28 — 👍 4    🔁 0    💬 0    📌 0
Post image

Delighted to share our new Perspective article @natrevneuro.nature.com, led by the great @edoardochidichimo.bsky.social : "Towards an informational account of interpersonal coordination". With @loopyluppi.bsky.social, Pedro Mediano, @introspection.bsky.social, Victoria Leong and Richard Bethlehem.

19.11.2025 14:27 — 👍 36    🔁 15    💬 1    📌 2