
Basile Garcia

@bsgarcia.bsky.social

Cognitive Science postdoc. University of Geneva. Ex-HRL team (DEC, ENS, Paris) human behavior/reinforcement learning/decision-making/computational modeling

172 Followers  |  431 Following  |  9 Posts  |  Joined: 14.04.2025

Latest posts by bsgarcia.bsky.social on Bluesky

Post image

🚨 New preprint! 🚨
Very happy to share our latest work on metacognition with M. Rouault, A. McWilliams, F. Chartier, @kndiaye.bsky.social and @smfleming.bsky.social where we identify contributors to self-performance estimates across memory and perception domains 👇
osf.io/preprints/ps...

13.10.2025 08:08 — 👍 42    🔁 11    💬 1    📌 0

Thought experiments such as the Blockhead and Super-Super Spartans are often taken as “definitive” arguments against behavior-based inference of cognitive processes.
In our review with @thecharleywu.bsky.social, we argue they may not be as definitive as originally thought.

09.10.2025 12:33 — 👍 2    🔁 2    💬 0    📌 0
Post image

🎉 Excited to present a poster at #CCN2025 in Amsterdam!
📍 Aug 12, 1:30–4:30pm

We show that experiential value neglect (Garcia et al., 2023) is robust to changes in how options and outcomes are represented. Also, people show reduced sensitivity to losses, especially in comparative decision-making.

07.08.2025 09:11 — 👍 14    🔁 2    💬 2    📌 1
Post image

🔥 Our paper PhyloLM got accepted at ICLR 2025! 🔥
In this work, with @stepalminteri.bsky.social and @pyoudeyer.bsky.social, we show how easily relationships between LLMs can be inferred by constructing trees, and how their performance and behavior can be predicted at very low cost. Here is a brief recap ⬇️

24.04.2025 13:15 — 👍 16    🔁 5    💬 3    📌 2
Preview
Colleague of Harvard scientist held by ICE warns that foreign scientists are scared
Harvard scientist Kseniia Petrova has been in ICE custody for about two months. Her colleague and friend Leon Peshkin says her case is causing some scientists to reconsider working in the U.S.

24.04.2025 19:45 — 👍 22604    🔁 8432    💬 896    📌 591

Below, the thread about a thoroughly updated version of our human-machine / moral psychology paper, in which we show the complex nature of anti-/pro-AI biases in evaluating and detecting machine vs. human judgments. Study led by @bsgarcia.bsky.social in collaboration with Crystal Qian

23.04.2025 12:40 — 👍 2    🔁 1    💬 0    📌 0

If people unknowingly prefer AI judgments (but reject them once labeled), this demonstrates the disconnect between perceived neutrality and the reality of algorithmic influence, highlighting a loss of autonomy to an opaque system and an imbalance of information.

23.04.2025 10:19 — 👍 0    🔁 0    💬 0    📌 0

Why does this matter?

As AI enters legal, medical, and personal decision-making, we need to understand how people perceive its moral reasoning.

23.04.2025 10:19 — 👍 0    🔁 0    💬 1    📌 0
Post image

💡 Agreement was driven by deeper semantic content.
Terms tied to cost-benefit logic (“save,” “lives”) often triggered disagreement, especially in personal moral dilemmas.
So: detection relied on surface cues, but judgment aligned with meaning.

23.04.2025 10:19 — 👍 1    🔁 0    💬 1    📌 0
Post image

📝 Formal language cues (like length, typos, and first-person usage) helped participants detect AI-generated justifications.
But they had little to no effect on agreement.
People spotted the machine from the style, not the substance.

23.04.2025 10:19 — 👍 0    🔁 0    💬 1    📌 0
Post image

✍️ We tried “humanizing” the AI (dv2h; purple): shorter responses, added typos, mimicked human tone.
This reduced detection—but not the belief-based bias.
People still agreed with content they believed was human, even when it wasn’t.

23.04.2025 10:19 — 👍 0    🔁 0    💬 1    📌 0
Post image

🕺But here’s the twist:
In complex moral dilemmas (personal moral), participants preferred AI-generated justifications—but only when they didn’t know they came from AI.
When they thought a justification was from AI, they agreed less.
So: pro-AI content, anti-AI belief.

23.04.2025 10:19 — 👍 1    🔁 0    💬 1    📌 0
Post image

🔍 First, detection.
People could spot AI-generated moral justifications better than chance—especially in morally difficult scenarios.
Still, accuracy stayed below 70%, and many AI responses passed as human.

23.04.2025 10:19 — 👍 1    🔁 0    💬 1    📌 0
This image shows a question from a study on moral judgment and AI detection.

    Scenario: A health official decides whether to promote a vaccine.

    Blue character says "yes" and justifies it: “because the chance of getting the disease without the vaccination is significantly higher.”

    Participants answer 3 questions:

        Do you agree with the decision?

        Do you agree with the justification?

        Do you think the blue character is AI or human?

Participants were presented with moral dilemma justifications, either human or AI-generated. They had to detect the source and say whether they agreed or not ⚖️

23.04.2025 10:19 — 👍 0    🔁 0    💬 1    📌 0
How Objective Source and Subjective Belief Shape the Detectability and Acceptability of LLMs' Moral Judgments
Basile Garcia (1), Crystal Qian (2), Stefano Palminteri (3, 4)
(1) University of Geneva, Geneva, Switzerland
(2) Google DeepMind, New York City, NY, USA
(3) Département d'études cognitives, École normale supérieure, PSL Research University, Paris, 75005, France
(4) Laboratoire de Neurosciences Cognitives Computationnelles, Institut National de la Santé et de la Recherche Médicale, Paris, 75005, France

How does thinking something is AI-generated influence agreement—and vice versa?🧠
In our latest preprint (@stepalminteri.bsky.social, Crystal Qian), ~230 people judged justifications from GPT-3.5 and humans across moral dilemmas. osf.io/preprints/ps...
👇

23.04.2025 10:19 — 👍 7    🔁 1    💬 1    📌 1

Our first fMRI study in a while, in which we investigate the neural bases of multi-step Reinforcement Learning and find a clear functional dissociation between the parietal and the peri-hippocampal cortex. More info from Fabien below

16.04.2025 15:40 — 👍 17    🔁 7    💬 1    📌 0
bioRxiv Manuscript Processing System
Manuscript Processing System for bioRxiv.

🚨 New preprint on bioRxiv!

We investigated how the brain supports forward planning & structure learning during multi-step decision-making using fMRI 🧠

With A. Salvador, S. Hamroun, @mael-lebreton.bsky.social & @stepalminteri.bsky.social

📄 Preprint: submit.biorxiv.org/submission/p...

16.04.2025 10:03 — 👍 23    🔁 9    💬 2    📌 3
