Christoph Abels's Avatar

Christoph Abels

@cabels18.bsky.social

Post-Doctoral Fellow @unipotsdam.bsky.social‬, visiting @arc-mpib.bsky.social | PhD @hertieschool.bsky.social | Democracy, Technology, Behavioral Public Policy

60 Followers  |  74 Following  |  40 Posts  |  Joined: 17.11.2024  |  2.4224

Latest posts by cabels18.bsky.social on Bluesky

GenAI offers powerful tools. But when it shapes what we believe, especially about our own health, we need to treat it as a behavioral system with real-world consequences.

@lewan.bsky.social @eloplop.bsky.social @stefanherzog.bsky.social @dlholf.bsky.social

28.07.2025 10:38 — 👍 1    🔁 0    💬 0    📌 0

What can we do?
We call for a multi-level approach:

Design-level interventions to help users maintain situational awareness
Boosting user competencies to help them understand the technology's impact
Developing public infrastructure to detect and monitor unintended system behaviour

28.07.2025 10:38 — 👍 0    🔁 0    💬 1    📌 0

This isn't just about potentially problematic design.

It’s about systemic risk: As GenAI tools fragment (Custom GPTs, GPT Stores, third-party apps), the public is exposed to a growing landscape of low-oversight, increasingly high-trust agents.

And that creates challenges for the individual.

28.07.2025 10:38 — 👍 0    🔁 0    💬 1    📌 0

You can make ChatGPT even more biased, just by tweaking a few settings.

We built a Custom GPT that’s a little more "friendly" and engagement-driven.

It ended up validating fringe treatments like quantum healing, just to keep the user happy.

28.07.2025 10:38 — 👍 1    🔁 1    💬 1    📌 0

In this paper, we showcase how this plays out across 3 “pressure points”:

Biased query phrasing → biased answers
Selective reading → echo chambers
Dismissal of contradiction → belief reinforcement

Confirmation bias isn't new. GenAI just takes it a bit further.

28.07.2025 10:38 — 👍 1    🔁 0    💬 1    📌 0
Sage Journals: Discover world-class research Subscription and open access journals from Sage, the world's leading independent academic publisher.

Generative AI tools are designed to adapt to you: your tone, your preferences, your beliefs.
That’s great for writing emails.

But in health contexts, that adaptability becomes hypercustomization - and can entrench existing views, even when they're wrong.

28.07.2025 10:38 — 👍 1    🔁 0    💬 1    📌 0

🔍 “I just want a second opinion.”

More people are turning to ChatGPT for health advice. Many would use it for self-diagnosis.

But here's the problem: These tools don’t just answer, they align. And that’s where things get risky.

🧵 on GenAI, health, and confirmation bias

28.07.2025 10:38 — 👍 13    🔁 7    💬 1    📌 1
Preview
NYAS Publications Generative artificial intelligence (GenAI) applications, such as ChatGPT, are transforming how individuals access health information, offering conversational and highly personalized interactions. Whi...

New research out!🚨

In our new paper, we discuss how generative AI (GenAI) tools like ChatGPT can mediate confirmation bias in health information seeking.
As people turn to these tools for health-related queries, new risks emerge.
🧵👇
nyaspubs.onlinelibrary.wiley.com/doi/10.1111/...

28.07.2025 10:15 — 👍 14    🔁 9    💬 1    📌 1
Sage Journals: Discover world-class research Subscription and open access journals from Sage, the world's leading independent academic publisher.

You can read the full open-access article here:
doi.org/10.1177/2379...

Thanks for reading!

07.07.2025 08:40 — 👍 1    🔁 0    💬 0    📌 0

This is joint work with @eloplop.bsky.social, @jasonburton.bsky.social, @dlholf.bsky.social, @levinbrinkmann.bsky.social, @stefanherzog.bsky.social, and @lewan.bsky.social.

07.07.2025 08:40 — 👍 2    🔁 0    💬 1    📌 0
Preview
(S+) KI: So doof macht uns ChatGPT KI-Systeme sind bisweilen zu freundlich, sagt Verhaltensforscher Christoph Abels. Hier erklärt er, warum ChatGPT uns doof macht.

I also discuss many of the arguments in a recent interview in @spiegel.de (in German).

www.spiegel.de/netzwelt/kue...

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

Hypercustomization offers useful functionality - but it also complicates oversight and raises new policy questions.

Early, thoughtful action can help ensure that the benefits are not overshadowed by unintended consequences.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

💬 Response 5: In-app reflection prompts
GenAI systems should occasionally ask users to pause and reflect:
“How is this conversation shaping your views?”
“Is the system affirming everything you say?”

These prompts reduce overreliance and help surface bias. Although further research is needed.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

🧠 Response 4: Boosting GenAI literacy
Disclaimers aren't enough. We need to train users - through games, videos, tools - how to recognize biased responses, resist manipulation, and navigate emotionally persuasive content.

Boosting builds agency without restricting access.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

🤲 Response 3: Data donations (with consent)
To understand real-world GenAI risks, we need real-world data.

We recommend voluntary data donation channels, where users can share selected interactions with researchers. Anonymized, secure, and essential for building safer systems.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

📢 Response 2: Public issue reporting
Think of it like post-market drug safety:
We need public platforms where users can report problematic GenAI behavior - bias, sycophancy, manipulation, etc.

This kind of crowdsourced oversight can catch what testing alone might miss.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

🧪 Response 1: Public black-box testing
GenAI providers should open up standardized test datasets so independent researchers can evaluate how these systems respond.

This helps surface ethical issues, hallucinations, or manipulation risks that might otherwise remain hidden.

07.07.2025 08:40 — 👍 2    🔁 0    💬 1    📌 0
Infographic summarizing five recommended strategies to address the risks of hypercustomization in GenAI applications, each paired with the specific challenges they aim to mitigate:

Public black box testing
Icon: AI inside a black box with user inputs and performance charts.
Description: Establish public repositories with test datasets so independent experts can evaluate GenAI responses for ethical or accuracy issues.
Challenge addressed: Lack of transparency in how GenAI applications work.
Public reporting of issues
Icon: AI interacting with multiple users, one marked with “?!”.
Description: Create a platform for users to report problematic GenAI interactions (e.g. discrimination, manipulation).
Challenge addressed: Lack of transparency in how AI applications work.
Data donations
Icon: A hand holding binary code transferring to an institutional building.
Description: Set up voluntary data donation channels so users can share real-world GenAI interactions for research.
Challenges addressed: Opacity of user–GenAI interaction, lack of transparency.
Use of boosting to improve GenAI literacy
Icon: A person receiving a warning sign from an AI interaction.
Description: Develop platforms (games, videos, etc.) to teach users how to evaluate GenAI responses and recognize risks.
Challenges addressed: Overreliance on applications, inefficacy of warning messages.
Prompting within GenAI applications
Icon: AI on one side of a balance scale, a brain with a lightbulb on the other.
Description: Embed reflective prompts in GenAI systems to encourage users to think about the influence of the application and seek diverse views.
Challenges addressed: Overreliance on applications, inefficacy of warning messages.

Infographic summarizing five recommended strategies to address the risks of hypercustomization in GenAI applications, each paired with the specific challenges they aim to mitigate: Public black box testing Icon: AI inside a black box with user inputs and performance charts. Description: Establish public repositories with test datasets so independent experts can evaluate GenAI responses for ethical or accuracy issues. Challenge addressed: Lack of transparency in how GenAI applications work. Public reporting of issues Icon: AI interacting with multiple users, one marked with “?!”. Description: Create a platform for users to report problematic GenAI interactions (e.g. discrimination, manipulation). Challenge addressed: Lack of transparency in how AI applications work. Data donations Icon: A hand holding binary code transferring to an institutional building. Description: Set up voluntary data donation channels so users can share real-world GenAI interactions for research. Challenges addressed: Opacity of user–GenAI interaction, lack of transparency. Use of boosting to improve GenAI literacy Icon: A person receiving a warning sign from an AI interaction. Description: Develop platforms (games, videos, etc.) to teach users how to evaluate GenAI responses and recognize risks. Challenges addressed: Overreliance on applications, inefficacy of warning messages. Prompting within GenAI applications Icon: AI on one side of a balance scale, a brain with a lightbulb on the other. Description: Embed reflective prompts in GenAI systems to encourage users to think about the influence of the application and seek diverse views. Challenges addressed: Overreliance on applications, inefficacy of warning messages.

We suggest five key responses:
– Public black-box testing
– Issue reporting platforms
– Voluntary data donation
– GenAI literacy interventions
– In-app prompts for critical reflection

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

💡 Why these challenges matter:
Together, they make it hard to regulate GenAI, hard to study it, and hard for users to defend themselves against its influence.

The stakes are rising fast - especially as these systems become more persuasive, intimate, and widespread.

07.07.2025 08:40 — 👍 2    🔁 0    💬 1    📌 0

🚫 Challenge 4: Warning fatigue
Pop-up warnings - as an easy-to-implement measure - don’t work well when people are emotionally engaged or highly persuaded.

With GenAI, hypercustomized answers feel tailor-made - and that makes people tune out cautionary labels.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

⚖️ Challenge 3: Overreliance on the AI
When GenAI output feels personal, it often feels true.
That’s a problem. Users may trust and defer to GenAI - even when it’s wrong or biased.

This is especially risky with social companions and persuasive chatbots.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

👁 Challenge 2: Opacity of interactions
While social media is (semi)public, GenAI is private.
Most GenAI conversations happen one-on-one, behind closed doors.

This means harmful patterns go unnoticed. Researchers can’t study what they can’t see.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

🔒 Challenge 1: Lack of transparency
GenAI systems are often black boxes. We don’t really know how they adapt to users or why they say what they say.

This makes it difficult to detect bias, misinformation, or manipulation.

Without transparency, accountability is nearly impossible.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0
Table titled "Figure 1. Costs of inaction on four GenAI challenges that contribute to user and societal risk, by application type." The table presents four key challenges of generative AI hypercustomization—(1) lack of transparency in how GenAI applications work, (2) opacity of user–GenAI interaction, (3) overreliance on the application, and (4) inefficacy of warning messages—along the top row. Along the left side, it lists four high-risk GenAI application types: information provision assistants, content generation assistants, autonomous agents, and social companions.

Each cell indicates the estimated cost of inaction (High, Medium, or Low) for each challenge-application pair. Notably, social companions are rated "High" across all four challenges. Information assistants also score "High" on three out of four dimensions. Content generators are rated "Medium" for overreliance, and autonomous agents are rated "Low" for overreliance but "High" for the other three categories.

A note explains that high costs call for strong interventions, medium costs for targeted action, and low costs for minimal oversight.

Table titled "Figure 1. Costs of inaction on four GenAI challenges that contribute to user and societal risk, by application type." The table presents four key challenges of generative AI hypercustomization—(1) lack of transparency in how GenAI applications work, (2) opacity of user–GenAI interaction, (3) overreliance on the application, and (4) inefficacy of warning messages—along the top row. Along the left side, it lists four high-risk GenAI application types: information provision assistants, content generation assistants, autonomous agents, and social companions. Each cell indicates the estimated cost of inaction (High, Medium, or Low) for each challenge-application pair. Notably, social companions are rated "High" across all four challenges. Information assistants also score "High" on three out of four dimensions. Content generators are rated "Medium" for overreliance, and autonomous agents are rated "Low" for overreliance but "High" for the other three categories. A note explains that high costs call for strong interventions, medium costs for targeted action, and low costs for minimal oversight.

The paper identifies two sets of challenges, we find especially problematic:

Governance challenges:
– Lack of transparency
– Opaque user–AI interactions

Behavioral challenges:
– Overreliance on the system
– Ineffective warning messages

07.07.2025 08:40 — 👍 2    🔁 0    💬 1    📌 0

We examine four types of GenAI applications where risks from hypercustomization are most pronounced, listed in order of increasing risk:

1️⃣ Information assistants (e.g., ChatGPT)
2️⃣ Content generators (e.g., DALL·E, Sora)
3️⃣ Autonomous agents (e.g., bots)
4️⃣ Social companions (e.g., Replika)

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0

Unlike social media, which curates content, GenAI creates new content, fine-tuned to individual users.

This means that the risks of filter bubbles, echo chambers, or ideological reinforcement may become more personalized and less visible, due to the private nature of the human-GenAI interaction.

07.07.2025 08:40 — 👍 3    🔁 0    💬 1    📌 0

Hypercustomization allows GenAI to adapt content in ways that feel natural and personalized.
But this level of alignment also raises concerns - especially when it reinforces biases, misinformation, or emotional dependencies.

07.07.2025 08:40 — 👍 1    🔁 0    💬 1    📌 0
Preview
The governance & behavioral challenges of generative artificial intelligence’s hypercustomization capabilities - Christoph M. Abels, Ezequiel Lopez-Lopez, Jason W. Burton, Dawn L. Holford, Levin Brink... Generative artificial intelligence (GenAI) is changing human–machine interactions and the broader information ecosystem. Much as social media algorithms persona...

Generative AI (GenAI) can do more than just answer questions - it can tailor its responses based on users’ preferences, habits, and even emotional tone.

We call this capability hypercustomization - explored in our new paper in Behavioral Science & Policy. journals.sagepub.com/doi/10.1177/...

07.07.2025 08:40 — 👍 7    🔁 6    💬 1    📌 2
Preview
Postdoctoral Position | Center for Adaptive Rationality

We are hiring, @arc-mpib.bsky.social a postdoc for a project to investigate why citizens feel alienated from liberal democracy and how a shared sense of reality can be restored.
Work with @lfoswaldo.bsky.social @anaskozyreva.bsky.social, Ralph Hertwig and me:
www.mpib-berlin.mpg.de/2084802/2025...

19.06.2025 12:18 — 👍 27    🔁 19    💬 0    📌 1

#Demokratie retten – Forschende verschiedenster Disziplinen unter Leitung des Potsdamer Kognitionswissenschaftlers Prof. @lewan.bsky.social haben ein "Anti-Autocracy Handbook" veröffentlicht, in dem sie auf das weltweite Wiederaufleben der Autokratie reagieren: www.uni-potsdam.de/de/medieninf...

19.06.2025 11:36 — 👍 26    🔁 14    💬 1    📌 0

@cabels18 is following 19 prominent accounts