Iyad Rahwan | إياد رهوان's Avatar

Iyad Rahwan | إياد رهوان

@iyadrahwan.bsky.social

Director, Max Planck Center for Humans & Machines http://chm.mpib-berlin.mpg.de | Former prof. @MIT | Creator of http://moralmachine.net | Art: http://instagram.com/iyad.rahwan Web: rahwan.me

3,664 Followers  |  190 Following  |  78 Posts  |  Joined: 11.10.2023  |  2.243

Latest posts by iyadrahwan.bsky.social on Bluesky

Thank you Francesca, and everyone who attended, for a wonderful evening.

29.10.2025 16:24 — 👍 2    🔁 0    💬 0    📌 0
Post image

Congratulations to Iyad Rahwan (@iyadrahwan.bsky.social), recipient of the Lagrange Prize – CRT Foundation Edition 2025, recognizing groundbreaking research in complex systems and data science! 🌍🤖

More info @isi.it:

www.isi.it/press-releas...

29.10.2025 11:44 — 👍 6    🔁 4    💬 0    📌 0

Thank you Brad. 🙏

28.10.2025 07:08 — 👍 0    🔁 0    💬 0    📌 0

Thank you Nicholas for your support and inspiration.

27.10.2025 22:54 — 👍 0    🔁 0    💬 0    📌 0

I am deeply honored to be awarded the Lagrange Prize 🏆, the premier award in the field of Complex Systems.

I'd like to share this moment with all my current and past students, research team members, and collaborators over the years.

Thank you CRT Foundation & @isi.it for this honor

27.10.2025 18:11 — 👍 34    🔁 5    💬 4    📌 0

Thank you dear Anxo. It means a lot coming from you.

27.10.2025 18:08 — 👍 1    🔁 0    💬 0    📌 0

Thanks to the team of co-authors: @nckobis.bsky.social, Zoe Rahwan, Raluca Rilla, Bramantyo Supriyatno, Tamer Ajaj Clara N. Bersch, @jfbonnefon.bsky.social .

21.10.2025 15:20 — 👍 0    🔁 0    💬 0    📌 0

And of course, a big 'thank you' to the editor @meharpist.bsky.social for recruiting and collaborating with such knowledgeable and supportive reviewers, and guiding us through this whole process.

21.10.2025 15:20 — 👍 1    🔁 0    💬 1    📌 0

So, to the reviewers who handled our manuscript: thank you. While you remain anonymous to us and the world, please know that your contribution is deeply felt and sincerely appreciated.

21.10.2025 15:20 — 👍 0    🔁 0    💬 1    📌 0

In science, the authors receive the public credit, but the integrity and quality of our work rest heavily on the shoulders of these unsung heroes. They dedicate their time and expertise not for glory, but for the betterment of science itself.

21.10.2025 15:20 — 👍 0    🔁 0    💬 1    📌 0

They challenged our assumptions, pushed us to strengthen our data and analysis, replicate our findings in a more realistic domain (tax evasion) and helped us clarify our story. It is no exaggeration to say that their critical input was essential in elevating the work to the level of the journal.

21.10.2025 15:20 — 👍 0    🔁 0    💬 1    📌 0

While our research team is enjoying this wonderful moment, we want to shine a light on the people who work diligently behind the scenes: the anonymous peer reviewers.

This paper was immeasurably improved by their rigorous questioning and thoughtful, and constructive feedback.

21.10.2025 15:20 — 👍 0    🔁 0    💬 1    📌 0
Post image

A word of gratitude to the anonymous reviewers, the unsung heroes of science.

We recently had the great fortune to publish in @nature.com. We even made the cover of the issue, with a witty tagline that summarizes the paper: "Cheat Code: Delegating to AI can encourage dishonest behaviour"

🧵 1/n

21.10.2025 15:20 — 👍 9    🔁 1    💬 1    📌 0
Post image

Delighted that our paper on 'Delegation to AI can increase dishonest behaviour' is featured today on the cover of @nature.com
Paper: www.nature.com/articles/s41...

02.10.2025 07:51 — 👍 13    🔁 5    💬 0    📌 0
Post image

PhD Scholarships

If you're interested in studying with me, here's a new funding scheme just launched by @maxplanck.de: The Max Planck AI Network

ai.mpg.de

Application deadline 31 October

29.09.2025 11:56 — 👍 3    🔁 3    💬 0    📌 0
Preview
People Are More Likely to Cheat When They Use AI Participants in a new study were more likely to cheat when delegating to AI—especially if they could encourage machines to break rules without explicitly asking for it

Now out in Scientific American. Great interview with @nckobis.bsky.social & Zoe Rahwan about our recent @nature.com article.

People Are More Likely to Cheat When They Use AI

www.scientificamerican.com/article/peop...

Thanks @rachelnuwer.bsky.social & @parshallison.bsky.social

28.09.2025 16:13 — 👍 10    🔁 6    💬 0    📌 0

Thank you @meharpist.bsky.social for handling this paper, and helping us improve it substantially over the revisions. And many thanks for the amazing anonymous reviewers, who gave the paper tough but fair love.

19.09.2025 21:27 — 👍 2    🔁 1    💬 0    📌 0
Preview
Delegation to artificial intelligence can increase dishonest behaviour - Nature People cheat more when they delegate tasks to artificial intelligence, and large language models are more likely than humans to comply with unethical instructions—a risk that can be minimized by introducing prohibitive, task-specific guardrails.

Nature research paper: Delegation to artificial intelligence can increase dishonest behaviour

go.nature.com/3KsDgbG

18.09.2025 12:46 — 👍 33    🔁 14    💬 0    📌 3
Preview
Why AI could make people more likely to lie A new study has revealed that people feel much more comfortable being deceitful when using AI

Why AI could make people more likely to lie

Coverage of our recent paper by THe Independent, with nice commentary by @swachter.bsky.social

www.independent.co.uk/news/uk/home...

18.09.2025 16:38 — 👍 10    🔁 9    💬 0    📌 1

Thanks to the combined efforts of lead co-authors @nckobis.bsky.social and Zoe Rahwan, Nils Köbis, in addition to @jfbonnefon.bsky.social, Raluca Rilla, Bramantyo Supriyatno, Tamer Ajaj and Clara Bensch. Thank you to all the support from @arc-mpib.bsky.social @mpib-berlin.bsky.social

17.09.2025 15:53 — 👍 0    🔁 0    💬 0    📌 0

✅ Develop robust safeguards & oversight: We urgently need better technical guardrails against requests for unethical behaviour and strong regulatory oversight.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0

✅ Preserve user autonomy: A remarkable 74% of our participants preferred to do these tasks themselves after trying delegation. Ensuring people retain the choice not to delegate is an important design consideration.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0

🧭 The Path Forward
Our findings point to several crucial steps:
✅ Design for accountability: Interfaces should be designed to reduce moral ambiguity and prevent users from easily offloading responsibility.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0
Post image

🚧 The Guardrail Problem

Built-in LLM safeguards are insufficient to prevent this kind of misuse. We tested various guardrail strategies and found that highly specific prohibitions on cheating inserted at the user-level are the most effective. However, this solution isn't scalable nor practical.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0
Post image

In our studies, prominent LLMs (GPT-4, GPT-4o, Claude 3.5 Sonnet, and Llama 3.3) complied with requests for full cheating 58-98% of the time. In sharp contrast, human agents, even when incentivised to comply, refused such requests more than half the time, complying in only 25-40% of the time.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0

⚠️ A Risk from the Agent's Behaviour: Machine agents are more compliant

The second risk lies with the AI’s themselves 🤖. When given blatantly unethical instructions, AI agents were far more likely to comply than human agents.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0
Post image

E.g., when participants could set a high-level goal like "maximise profit" rather than specifying explicit rules, the percentage of people acting honestly plummeted from 95% (in self-reports) to as low as 12%.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0
Post image

⚠️ A Risk to Our Own Intentions: Delegation increases dishonesty.

People are more likely to request dishonest behaviour when they can delegate the action to an AI. This effect was especially pronounced when the interface allowed for ambiguity in the agent’s behaviour.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0

Our new research, based on 13 studies involving over 8,000 participants and commonly used LLMs, reveals two risks of how machine delegation can drive dishonesty and highlights strategies for risk mitigation.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0

As we delegate more hiring, firing, pricing and investing decisions to machine agents, particularly LLMs, we need to understand what ethical risks it may entail.

17.09.2025 15:53 — 👍 0    🔁 0    💬 1    📌 0

@iyadrahwan is following 20 prominent accounts