
Lara Kirfel

@larakirfel.bsky.social

CogSci, Philosophy & AI, Postdoc at Max Planck Institute Berlin.

289 Followers  |  333 Following  |  5 Posts  |  Joined: 02.02.2024

Latest posts by larakirfel.bsky.social on Bluesky

"A framework for blaming willlful ignorance" 🫣
-- New review paper with @rzultan.bsky.social and @tobigerstenberg.bsky.social now out in "Current Opinion in Psychology".

10.07.2025 05:07 — 👍 5    🔁 0    💬 0    📌 0

What makes people judge that someone was forced to take a particular action?

Research by @larakirfel.bsky.social et al. suggests that one influence on this judgment is people’s representation of what an agent knows to be possible:

buff.ly/Z0oaRNk

HT @xphilosopher.bsky.social

17.05.2025 15:09 — 👍 3    🔁 1    💬 0    📌 0
Title: Representations of what’s possible reflect others’ epistemic states

Authors: Lara Kirfel, Matthew Mandelkern, and Jonathan Scott Phillips

Abstract: People’s judgments about what an agent can do are shaped by various constraints, including probability, morality, and normality. However, little is known about how these representations of possible actions—what we call modal space representations—are influenced by an agent’s knowledge of their environment. Across two studies, we investigated whether epistemic constraints systematically shift modal space representations and whether these shifts affect high-level force judgments. Study 1 replicated prior findings that the first actions that come to mind are perceived as the most probable, moral, and normal, and demonstrated that these constraints apply regardless of an agent’s epistemic state. Study 2 showed that limiting an agent’s knowledge changes which actions people perceive to be available for the agent, which in turn affects whether people judged an agent as being “forced” to take a particular action. These findings highlight the role of Theory of Mind in modal cognition, revealing how epistemic constraints shape perceptions of possibilities.

🏔️ Brad is lost in the wilderness—but doesn’t know there’s a town nearby. Was he forced to stay put?

In our #CogSci2025 paper, we show that judgments of what’s possible—and whether someone had to act—depend on what agents know.

📰 osf.io/preprints/ps...

w/ Matt Mandelkern & @jsphillips.bsky.social

16.05.2025 12:04 — 👍 11    🔁 4    💬 0    📌 0

New paper for #CogSci2025: People cheat more when they delegate to AI. How can we stop this? We tested:

🧠 Explaining what the AI does (transparency)
🗣️ Calling cheating what it is (framing)

Only one worked.

w/ @larakirfel.bsky.social, Anne-Marie Nussberger, Raluca Rilla & @iyadrahwan.bsky.social

12.05.2025 10:20 — 👍 17    🔁 5    💬 0    📌 1
When AI meets counterfactuals: the ethical implications of counterfactual world simulation models

❗Now out in "AI and Ethics"❗

What are the consequences of AI that can reason counterfactually? Our new paper explores the ethical dimensions of AI-driven counterfactual world simulation. 🌎 🤖

With @tobigerstenberg.bsky.social, Rob MacCoun and Thomas Icard.

Link: shorturl.at/bHYEO

14.04.2025 08:50 — 👍 11    🔁 2    💬 0    📌 0

Does wilful ignorance—deliberately avoiding finding out whether you’re implicated in criminal activity—make you culpable? Research by @larakirfel.bsky.social & Hannikainen suggests yes, as long as you suspected that might be the case: https://buff.ly/3XDopzh
HT @xphilosopher.bsky.social

24.02.2025 16:09 — 👍 6    🔁 2    💬 0    📌 0
"Moral AI"?! Navigating Ethical Decisions with Large Language Models | re:publica Die re:publica Berlin ist das Festival für die digitale Gesellschaft und die größte Konferenz ihrer Art in Europa. Die Teilnehmer*innen der re:publica bilden einen Querschnitt unserer (digitalen) Gese...

Hi Republica #rp24,

come catch my talk on all things AI and ethical decisions tonight at Lightning Box 1, 7:15 pm. #WhoCares

re-publica.com/en/node/5223

27.05.2024 10:04 — 👍 12    🔁 0    💬 0    📌 0

🗣️ People often select only a few events when explaining what happened. What drives people’s explanation selection?

🗞️ In our new paper, we propose a new model and show that people use explanations to communicate effective interventions. #Cogsci2024

🔗 Link to paper: osf.io/preprints/ps...

13.05.2024 08:14 — 👍 10    🔁 0    💬 0    📌 0

Human morality and what it means for AI! This week it’s getting exciting again with @larakirfel.bsky.social on @realscide.bsky.social!

#wisskomm #scicomm

05.02.2024 06:23 — 👍 12    🔁 2    💬 0    📌 0