When given the choice, participants sought human empathy, despite rating AI responses as more empathetic and making them feel more heard.
@jdweng.bsky.social
@dcameron.bsky.social
@minzlicht.bsky.social
www.nature.com/articles/s44...
Our findings highlight the impressive potential of AI for high-quality emotional support, while emphasizing the importance of respecting individual preferences in empathy-seeking behavior.
04.02.2026 23:30
This effect appears for participants' real-life emotional situations, and even when the human empathizer is an expert (e.g., trained crisis responders).
04.02.2026 23:30
In our new research, we examined whether people choose to receive empathy from a human or AI empathizer when given the free option.
Across four studies, we find an "AI empathy choice paradox":
- People generally choose human empathizers.
- But when they do choose AI, they rate it as more empathetic.
New publication with @dcameron.bsky.social and @minzlicht.bsky.social in @commspsychol.nature.com!
www.nature.com/articles/s44...
AI empathy is good, but would people actually choose to turn to AI for emotional support over a human empathizer?
We also see empathy as part of a broader philosophy of science conversation: how should scientists engage with participant experiences to inform our construct definitions, and how should we bound our constructs as new technologies and relational possibilities emerge?
5/5
Rather than letting modality (human vs. AI) dictate construct boundaries, we suggest grounding empathy in what it does for people, and why that matters for theory, measurement, and public relevance. As AI reshapes social interaction, our constructs need to be flexible enough to keep up.
4/5
In our new preprint, we argue for a functional-relational approach to empathy, highlighting:
- the multiple functions empathy serves
- the role of relational context
- the importance of lived experience in defining psychological constructs
3/5
Traditional models of empathy focus on the empathizer's embodied emotional experience, which AI lacks. Yet people report feeling cared for by AI. This tension between human experience and researcher-imposed construct definitions raises questions about what it truly means for AI to "empathize."
2/5
New preprint with @dcameron.bsky.social and @mgreinecke.bsky.social!
Rethinking empathy in the age of AI osf.io/preprints/ps...
How should we define empathy as a construct in an age where AI can provide quality emotional support, but doesn't actually feel?
1/5
Wintry walk with the EMP Lab
@amormino.bsky.social @jokretz.bsky.social @farvk.bsky.social @jdweng.bsky.social
@psuliberalarts.bsky.social
@prcpennstate.bsky.social
@ssripennstate.bsky.social
@rockethics.bsky.social
Finally, we suggest that questions about whether empathy from one source is inherently "better" are difficult to answer without grounding them in a normative ethical framework that can weigh the relative value of different empathic qualities and their effects on well-being.
17.10.2025 19:11
From an empathy recipient's perspective, the preference for one source over another may depend on how they weigh these trade-offs in light of their particular emotional situation. In some moments, accessible empathy may be more valuable than selective empathy.
17.10.2025 19:10
Human empathy has the potential for unique qualities such as selectivity and effort, though its expression of these qualities manifests in a wide variety of forms. AI empathy, in turn, offers its own advantages, including consistency and accessibility.
17.10.2025 19:10
New chapter with @dcameron.bsky.social, Martina Orlandi, and @minzlicht.bsky.social:
osf.io/preprints/ps...
In this chapter, we argue that instead of debating whether human or AI empathy is superior overall, it is more useful to focus on the distinct trade-offs that each source of empathy offers.
Picture of members of the Empathy & Moral Psychology (EMP) Lab
A snapshot of our last summer meeting of the EMP Lab. A Happy Valley welcome to new members, graduate student
@jokretz.bsky.social & post-doc @amormino.bsky.social, & farewell to alum @rachelbuterbaugh.bsky.social, who's off to grad school. With @jdweng.bsky.social @farvk.bsky.social, a great team!
So grateful for the chance to attend the EASP Summer School organized by @jimaceverett.bsky.social. Huge thanks to @jimaceverett.bsky.social and @mgreinecke.bsky.social for your mentorship in the Moral Psych of AI workstream, and to all of the other amazing students I had the chance to learn from!
01.08.2025 12:39
Thanks to everybody who chimed in!
I arrived at the conclusion that (1) there's a lot of interesting stuff about interactions and (2) the figure I was looking for does not exist.
So, I made it myself! Here's a simple illustration of how to control for confounding in interactions:
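The figure itself isn't reproduced here, but the core idea can be sketched in a short simulation. This is a hypothetical illustration (the variable names, data-generating process, and effect sizes are my own assumptions, not from the original figure): a confounder drives both the focal predictor and the moderator, and also interacts with the moderator, so a naive interaction model finds a spurious x-by-m effect that disappears once the confounder and its interactions are included.

```python
import numpy as np

# Hypothetical simulation: confounder c drives predictor x and moderator m,
# and interacts with m in producing y. The TRUE x*m interaction is zero.
rng = np.random.default_rng(0)
n = 20_000
c = rng.normal(size=n)               # confounder
x = c + rng.normal(size=n)           # focal predictor, confounded by c
m = c + rng.normal(size=n)           # moderator, confounded by c
y = x + m + 0.5 * c * m + c + rng.normal(size=n)  # note: no x*m term

def ols(design, y):
    """Ordinary least squares via lstsq; returns coefficient vector."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

ones = np.ones(n)
# Naive model: y ~ x + m + x:m, ignoring the confounder entirely.
naive = ols(np.column_stack([ones, x, m, x * m]), y)
# Adjusted model: also include c and its interactions with m and x.
adjusted = ols(np.column_stack([ones, x, m, x * m, c, c * m, c * x]), y)

print(f"naive x*m estimate:    {naive[3]:+.3f}")     # spuriously nonzero
print(f"adjusted x*m estimate: {adjusted[3]:+.3f}")  # close to the true 0
```

The adjustment works because the spurious interaction is driven by the omitted c*m term, which correlates with x*m when x and m share the confounder; adding only the confounder's main effect would not be enough.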
I'll be sharing some data from our recent preprint on AI empathy choice (osf.io/preprints/os...) in a talk at the Society for Affective Science Annual Conference this Saturday. Stop by or reach out if you're interested in talking about AI, empathy, or causal inference!
@affectscience.bsky.social
Across multiple studies we examine this AI empathy choice paradox and explore how it varies across empathy vs. compassion, physical vs. emotional suffering, and positive vs. negative situations, as well as the importance of perceived empathizer effort.
05.03.2025 13:52
New preprint with @dcameron.bsky.social and @minzlicht.bsky.social!
We find that people choose to receive empathy from human over AI empathizers, despite rating AI responses as more empathetic and as making them feel more heard.
Link: osf.io/preprints/os...
On another note, the EMP Lab will be at @affectscience.bsky.social in Portland, Oregon, for anyone who'd like to chat about empathy, motivated emotion regulation, & moral outrage. My grad student @jdweng.bsky.social will be giving his first external talk on human vs. AI empathy. The coffee awaits!
25.02.2025 19:41