Thank you Francesca, and everyone who attended, for a wonderful evening.
29.10.2025 16:24 — 👍 2 🔁 0 💬 0 📌 0@iyadrahwan.bsky.social
Director, Max Planck Center for Humans & Machines http://chm.mpib-berlin.mpg.de | Former prof. @MIT | Creator of http://moralmachine.net | Art: http://instagram.com/iyad.rahwan Web: rahwan.me
Thank you Francesca, and everyone who attended, for a wonderful evening.
29.10.2025 16:24 — 👍 2 🔁 0 💬 0 📌 0Congratulations to Iyad Rahwan (@iyadrahwan.bsky.social), recipient of the Lagrange Prize – CRT Foundation Edition 2025, recognizing groundbreaking research in complex systems and data science! 🌍🤖
More info @isi.it:
www.isi.it/press-releas...
Thank you Brad. 🙏
28.10.2025 07:08 — 👍 0 🔁 0 💬 0 📌 0Thank you Nicholas for your support and inspiration.
27.10.2025 22:54 — 👍 0 🔁 0 💬 0 📌 0I am deeply honored to be awarded the Lagrange Prize 🏆, the premier award in the field of Complex Systems.
I'd like to share this moment with all my current and past students, research team members, and collaborators over the years.
Thank you CRT Foundation & @isi.it for this honor
Thank you dear Anxo. It means a lot coming from you.
27.10.2025 18:08 — 👍 1 🔁 0 💬 0 📌 0Thanks to the team of co-authors: @nckobis.bsky.social, Zoe Rahwan, Raluca Rilla, Bramantyo Supriyatno, Tamer Ajaj Clara N. Bersch, @jfbonnefon.bsky.social .
21.10.2025 15:20 — 👍 0 🔁 0 💬 0 📌 0And of course, a big 'thank you' to the editor @meharpist.bsky.social for recruiting and collaborating with such knowledgeable and supportive reviewers, and guiding us through this whole process.
21.10.2025 15:20 — 👍 1 🔁 0 💬 1 📌 0So, to the reviewers who handled our manuscript: thank you. While you remain anonymous to us and the world, please know that your contribution is deeply felt and sincerely appreciated.
21.10.2025 15:20 — 👍 0 🔁 0 💬 1 📌 0In science, the authors receive the public credit, but the integrity and quality of our work rest heavily on the shoulders of these unsung heroes. They dedicate their time and expertise not for glory, but for the betterment of science itself.
21.10.2025 15:20 — 👍 0 🔁 0 💬 1 📌 0They challenged our assumptions, pushed us to strengthen our data and analysis, replicate our findings in a more realistic domain (tax evasion) and helped us clarify our story. It is no exaggeration to say that their critical input was essential in elevating the work to the level of the journal.
21.10.2025 15:20 — 👍 0 🔁 0 💬 1 📌 0While our research team is enjoying this wonderful moment, we want to shine a light on the people who work diligently behind the scenes: the anonymous peer reviewers.
This paper was immeasurably improved by their rigorous questioning and thoughtful, and constructive feedback.
A word of gratitude to the anonymous reviewers, the unsung heroes of science.
We recently had the great fortune to publish in @nature.com. We even made the cover of the issue, with a witty tagline that summarizes the paper: "Cheat Code: Delegating to AI can encourage dishonest behaviour"
🧵 1/n
Delighted that our paper on 'Delegation to AI can increase dishonest behaviour' is featured today on the cover of @nature.com
Paper: www.nature.com/articles/s41...
PhD Scholarships
If you're interested in studying with me, here's a new funding scheme just launched by @maxplanck.de: The Max Planck AI Network
ai.mpg.de
Application deadline 31 October
Now out in Scientific American. Great interview with @nckobis.bsky.social & Zoe Rahwan about our recent @nature.com article.
People Are More Likely to Cheat When They Use AI
www.scientificamerican.com/article/peop...
Thanks @rachelnuwer.bsky.social & @parshallison.bsky.social
Thank you @meharpist.bsky.social for handling this paper, and helping us improve it substantially over the revisions. And many thanks for the amazing anonymous reviewers, who gave the paper tough but fair love.
19.09.2025 21:27 — 👍 2 🔁 1 💬 0 📌 0Nature research paper: Delegation to artificial intelligence can increase dishonest behaviour
go.nature.com/3KsDgbG
Why AI could make people more likely to lie
Coverage of our recent paper by THe Independent, with nice commentary by @swachter.bsky.social
www.independent.co.uk/news/uk/home...
Thanks to the combined efforts of lead co-authors @nckobis.bsky.social and Zoe Rahwan, Nils Köbis, in addition to @jfbonnefon.bsky.social, Raluca Rilla, Bramantyo Supriyatno, Tamer Ajaj and Clara Bensch. Thank you to all the support from @arc-mpib.bsky.social @mpib-berlin.bsky.social
17.09.2025 15:53 — 👍 0 🔁 0 💬 0 📌 0✅ Develop robust safeguards & oversight: We urgently need better technical guardrails against requests for unethical behaviour and strong regulatory oversight.
17.09.2025 15:53 — 👍 0 🔁 0 💬 1 📌 0✅ Preserve user autonomy: A remarkable 74% of our participants preferred to do these tasks themselves after trying delegation. Ensuring people retain the choice not to delegate is an important design consideration.
17.09.2025 15:53 — 👍 0 🔁 0 💬 1 📌 0🧭 The Path Forward
Our findings point to several crucial steps:
✅ Design for accountability: Interfaces should be designed to reduce moral ambiguity and prevent users from easily offloading responsibility.
🚧 The Guardrail Problem
Built-in LLM safeguards are insufficient to prevent this kind of misuse. We tested various guardrail strategies and found that highly specific prohibitions on cheating inserted at the user-level are the most effective. However, this solution isn't scalable nor practical.
In our studies, prominent LLMs (GPT-4, GPT-4o, Claude 3.5 Sonnet, and Llama 3.3) complied with requests for full cheating 58-98% of the time. In sharp contrast, human agents, even when incentivised to comply, refused such requests more than half the time, complying in only 25-40% of the time.
17.09.2025 15:53 — 👍 0 🔁 0 💬 1 📌 0⚠️ A Risk from the Agent's Behaviour: Machine agents are more compliant
The second risk lies with the AI’s themselves 🤖. When given blatantly unethical instructions, AI agents were far more likely to comply than human agents.
E.g., when participants could set a high-level goal like "maximise profit" rather than specifying explicit rules, the percentage of people acting honestly plummeted from 95% (in self-reports) to as low as 12%.
17.09.2025 15:53 — 👍 0 🔁 0 💬 1 📌 0⚠️ A Risk to Our Own Intentions: Delegation increases dishonesty.
People are more likely to request dishonest behaviour when they can delegate the action to an AI. This effect was especially pronounced when the interface allowed for ambiguity in the agent’s behaviour.
Our new research, based on 13 studies involving over 8,000 participants and commonly used LLMs, reveals two risks of how machine delegation can drive dishonesty and highlights strategies for risk mitigation.
17.09.2025 15:53 — 👍 0 🔁 0 💬 1 📌 0As we delegate more hiring, firing, pricing and investing decisions to machine agents, particularly LLMs, we need to understand what ethical risks it may entail.
17.09.2025 15:53 — 👍 0 🔁 0 💬 1 📌 0