"A framework for blaming willful ignorance" 🫣
-- New review paper with @rzultan.bsky.social and @tobigerstenberg.bsky.social now out in "Current Opinion in Psychology".
What makes people judge that someone was forced to take a particular action?
Research by @larakirfel.bsky.social et al. suggests one influence on this judgment is people’s representation of what an agent knows is possible:
buff.ly/Z0oaRNk
HT @xphilosopher.bsky.social
Title: Representations of what’s possible reflect others’ epistemic states
Authors: Lara Kirfel, Matthew Mandelkern, and Jonathan Scott Phillips
Abstract: People’s judgments about what an agent can do are shaped by various constraints, including probability, morality, and normality. However, little is known about how these representations of possible actions—what we call modal space representations—are influenced by an agent’s knowledge of their environment. Across two studies, we investigated whether epistemic constraints systematically shift modal space representations and whether these shifts affect high-level force judgments. Study 1 replicated prior findings that the first actions that come to mind are perceived as the most probable, moral, and normal, and demonstrated that these constraints apply regardless of an agent’s epistemic state. Study 2 showed that limiting an agent’s knowledge changes which actions people perceive to be available for the agent, which in turn affects whether people judged an agent as being “forced” to take a particular action. These findings highlight the role of Theory of Mind in modal cognition, revealing how epistemic constraints shape perceptions of possibilities.
🏔️ Brad is lost in the wilderness—but doesn’t know there’s a town nearby. Was he forced to stay put?
In our #CogSci2025 paper, we show that judgments of what’s possible—and whether someone had to act—depend on what agents know.
📰 osf.io/preprints/ps...
w/ Matt Mandelkern & @jsphillips.bsky.social
New paper for #CogSci2025: People cheat more when they delegate to AI. How can we stop this? We tested:
🧠 Explaining what the AI does (transparency)
🗣️ Calling cheating what it is (framing)
Only one worked.
w/ @larakirfel.bsky.social, Anne-Marie Nussberger, Raluca Rilla & @iyadrahwan.bsky.social
❗Now out in "AI and Ethics"❗
What are the consequences of AI that can reason counterfactually? Our new paper explores the ethical dimensions of AI-driven counterfactual world simulation. 🌎 🤖
With @tobigerstenberg.bsky.social, Rob MacCoun and Thomas Icard.
Link: shorturl.at/bHYEO
Does willful ignorance—intentionally avoiding finding out whether you’re implicated in criminal activity—make you culpable? Research by @larakirfel.bsky.social & Hannikainen suggests yes, as long as you suspected that might be the case: https://buff.ly/3XDopzh
HT @xphilosopher.bsky.social
Hi Republica #rp24,
Come catch me at my talk on all things AI and Ethical Decisions tonight at Lightning Box 1, 7:15 pm. #WhoCares
re-publica.com/en/node/5223
🗣️ People often select only a few events when explaining what happened. What drives people’s explanation selection?
🗞️ In our new paper, we propose a model and show that people use explanations to communicate effective interventions. #Cogsci2024
🔗 Link to paper: osf.io/preprints/ps...
Human morality and what it means for AI! This week things get exciting again with @larakirfel.bsky.social on @realscide.bsky.social!
#wisskomm #scicomm