
Josh Wenger

@jdweng.bsky.social

PhD Student in Psychology and Social Data Analytics | NSF GRFP Fellow

91 Followers  |  106 Following  |  17 Posts  |  Joined: 18.02.2025

Posts by Josh Wenger (@jdweng.bsky.social)

Preview: People choose to receive human empathy despite rating AI empathy higher - Communications Psychology

When given the choice, participants sought human empathy, despite rating AI responses as more empathetic and making them feel more heard.
@jdweng.bsky.social
@dcameron.bsky.social
@minzlicht.bsky.social
www.nature.com/articles/s44...

12.02.2026 17:34 β€” πŸ‘ 22    πŸ” 6    πŸ’¬ 0    πŸ“Œ 0

Our findings highlight the impressive potential of AI for high-quality emotional support, while emphasizing the importance of respecting individual preferences in empathy-seeking behavior.

04.02.2026 23:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This effect appears for participants’ real-life emotional situations, and even when the human empathizer is an expert (e.g., trained crisis responders).

04.02.2026 23:30 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In our new research, we examined whether people choose to receive empathy from a human or an AI empathizer when given a free choice.

Across four studies, we find an β€œAI empathy choice paradox”:
β€”People generally choose human empathizers.
β€”But when they do choose AI, they rate it as more empathetic.

04.02.2026 23:30 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

New publication with @dcameron.bsky.social and @minzlicht.bsky.social in @commspsychol.nature.com!

www.nature.com/articles/s44...

AI empathy is good, but would people actually choose to turn to AI for emotional support over a human empathizer?

04.02.2026 23:30 β€” πŸ‘ 19    πŸ” 6    πŸ’¬ 2    πŸ“Œ 2

We also see empathy as part of a broader philosophy of science conversation: how should scientists engage with participant experiences to inform our construct definitions, and how should we bound our constructs as new technologies and relational possibilities emerge?

5/5

16.01.2026 16:36 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Rather than letting modality (human vs. AI) dictate construct boundaries, we suggest grounding empathy in what it does for peopleβ€”and why that matters for theory, measurement, and public relevance. As AI reshapes social interaction, our constructs need to be flexible enough to keep up.

4/5

16.01.2026 16:36 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

In our new preprint, we argue for a functional-relational approach to empathy, highlighting:
- the multiple functions empathy serves
- the role of relational context
- the importance of lived experience in defining psychological constructs

3/5

16.01.2026 16:36 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Traditional models of empathy focus on the empathizer’s embodied emotional experience, which AI lacks. Yet people report feeling cared for by AI. This tension between human experience and researcher-imposed construct definitions raises questions about what it truly means for AI to β€œempathize.”

2/5

16.01.2026 16:36 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

New preprint with @dcameron.bsky.social and @mgreinecke.bsky.social!

Rethinking empathy in the age of AI osf.io/preprints/ps...

How should we define empathy as a construct in an age where AI can provide quality emotional support, but doesn’t actually feel?

1/5

16.01.2026 16:36 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 1    πŸ“Œ 1
Post image

Wintry walk with the EMP Lab
@amormino.bsky.social @jokretz.bsky.social @farvk.bsky.social @jdweng.bsky.social

@psuliberalarts.bsky.social
@prcpennstate.bsky.social
@ssripennstate.bsky.social
@rockethics.bsky.social

09.12.2025 00:37 β€” πŸ‘ 10    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

Finally, we suggest that questions about whether empathy from one source is inherently β€œbetter” are difficult to answer without grounding them in a normative ethical framework that can weigh the relative value of different empathic qualities and their effects on well-being.

17.10.2025 19:11 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

From an empathy recipient’s perspective, the preference for one source over another may depend on how they weigh these trade-offs in light of their particular emotional situation. In some moments, accessible empathy may be more valuable than selective empathy.

17.10.2025 19:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Human empathy has the potential for unique qualities such as selectivity and effort, though its expression of these qualities manifests in a wide variety of forms. AI empathy, in turn, offers its own unique advantages, including consistency and accessibility.

17.10.2025 19:10 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

New chapter with @dcameron.bsky.social, Martina Orlandi, and @minzlicht.bsky.social:
osf.io/preprints/ps...

In this chapter, we argue that instead of debating whether human or AI empathy is superior overall, it is more useful to focus on the distinct trade-offs that each source of empathy offers.

17.10.2025 19:09 β€” πŸ‘ 6    πŸ” 1    πŸ’¬ 3    πŸ“Œ 1
Picture of members of the Empathy & Moral Psychology (EMP) Lab

A snapshot of our last summer meeting of the EMP Lab. A Happy Valley welcome to new members, graduate student
@jokretz.bsky.social & post-doc @amormino.bsky.social, & farewell to alum @rachelbuterbaugh.bsky.social, who's off to grad school. With @jdweng.bsky.social @farvk.bsky.social, a great team!

18.08.2025 17:39 β€” πŸ‘ 8    πŸ” 3    πŸ’¬ 2    πŸ“Œ 0
Post image

So grateful for the chance to attend the EASP Summer School organized by @jimaceverett.bsky.social. Huge thanks to @jimaceverett.bsky.social and @mgreinecke.bsky.social for your mentorship in the Moral Psych of AI workstream, and to all of the other amazing students I had the chance to learn from!

01.08.2025 12:39 β€” πŸ‘ 12    πŸ” 5    πŸ’¬ 2    πŸ“Œ 2

Thanks to everybody who chimed in!

I arrived at the conclusion that (1) there's a lot of interesting stuff about interactions and (2) the figure I was looking for does not exist.

So, I made it myself! Here's a simple illustration of how to control for confounding in interactions:

11.05.2025 05:34 β€” πŸ‘ 1127    πŸ” 273    πŸ’¬ 67    πŸ“Œ 18
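The figure itself isn't reproduced in this scrape, but the idea it illustrates can be sketched in a small simulation (all variable names and coefficients below are hypothetical, chosen only to demonstrate the point): when estimating an x*z interaction, adjusting for a confounder c of the x-y relationship as a main effect alone is not enough; the confounder's own c*z interaction must also be included.

```python
# Hypothetical simulation sketch: a confounder c drives both x and a
# c*z interaction in y. The true x*z effect is zero, but a model that
# adjusts for c only as a main effect recovers a spurious x*z estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
c = rng.normal(size=n)                           # confounder
z = rng.normal(size=n)                           # moderator
x = 0.8 * c + rng.normal(size=n)                 # exposure, confounded by c
y = 0.5 * x + 0.3 * c * z + rng.normal(size=n)   # true x*z effect is 0


def ols(y, *cols):
    """OLS with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta


# Naive model: y ~ x + z + x*z + c   (confounder as main effect only)
naive = ols(y, x, z, x * z, c)
# Fuller model: y ~ x + z + x*z + c + c*z
full = ols(y, x, z, x * z, c, c * z)

print("naive x*z estimate:", naive[3])  # biased away from 0
print("full  x*z estimate:", full[3])   # near the true value of 0
```

The naive fit attributes part of the c*z signal to x*z because x and c are correlated; adding the c*z term removes that bias.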
Post image

I'll be sharing some data from our recent preprint on AI empathy choice (osf.io/preprints/os...) in a talk at the Society for Affective Science Annual Conference this Saturday. Stop by or reach out if you're interested in talking about AI, empathy, or causal inference!

@affectscience.bsky.social

20.03.2025 19:55 β€” πŸ‘ 5    πŸ” 1    πŸ’¬ 0    πŸ“Œ 1

Across multiple studies we examine this AI empathy choice paradox, exploring how it varies between empathy vs. compassion, physical vs. emotional suffering, and positive vs. negative situations, and examining the importance of perceived empathizer effort.

05.03.2025 13:52 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

New preprint with @dcameron.bsky.social and @minzlicht.bsky.social!

We find that people choose to receive empathy from human over AI empathizers, despite finding AI responses superior in terms of empathy and making them feel more heard.

Link: osf.io/preprints/os...

05.03.2025 13:52 β€” πŸ‘ 12    πŸ” 3    πŸ’¬ 2    πŸ“Œ 2
Post image ALT: a cat driving a car with the words headed to portland written below it

On another note, the EMP Lab will be at @affectscience.bsky.social in Portland, Oregon, for anyone who'd like to chat about empathy, motivated emotion regulation, & moral outrage. My grad student @jdweng.bsky.social will be giving his first external talk on human vs. AI empathy. The coffee awaits!

25.02.2025 19:41 β€” πŸ‘ 13    πŸ” 3    πŸ’¬ 1    πŸ“Œ 0