Photo of the first question of the recent national consultation in Hungary.
The Hungarian government often mails people taxpayer-funded “National Consultations” with condescending, leading questions:
“Do you wanna raise taxes?”
A: “Yes, I like paying more to fund those who don’t work.”
B: “No.”
Then they use the results to fake support for their agenda…
12.10.2025 16:48 — 👍 0 🔁 0 💬 0 📌 0
Cartography of generative AI
Just came across this mesmerising art by Estampa of the components of GenAI interacting with humanity: cartography-of-generative-ai.net
Reminds me a little of Kate Crawford and Vladan Joler's Anatomy of an AI System: anatomyof.ai
11.10.2025 18:29 — 👍 0 🔁 0 💬 0 📌 0
Our CDT is based in the Edinburgh Futures Institute – the University of Edinburgh’s brand new hub for research, innovation and teaching focused on socially just artificial intelligence and data.
Please share!
We have a number of fully funded PhD studentships in "Designing Responsible Natural Language Processing". I'm a possible supervisor & I'd be keen to support projects on sociolinguistics-AI, e.g., accent bias in AI, language+gender/sexuality+AI.
www.responsiblenlp.org
10.10.2025 15:03 — 👍 18 🔁 20 💬 0 📌 0
Ironically, the “AI scientist”-written paper that was accepted to an ICLR workshop was about LSTMs and the paper didn’t cite Schmidhuber. AI scientists should have a Schmidhuber agent that verifies these important details in the future.
07.10.2025 21:23 — 👍 0 🔁 0 💬 0 📌 0
Never ask a man his age, a woman her salary, or GPT-5 whether a seahorse emoji exists
06.09.2025 13:08 — 👍 2111 🔁 425 💬 99 📌 81
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death.
Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...
26.08.2025 13:01 — 👍 4645 🔁 1744 💬 113 📌 579
"I have wait a long time for this moment, my little red friend." - Emperor Palpatine, probably
25.08.2025 12:10 — 👍 2 🔁 0 💬 1 📌 0
The Art of Fauna: Cozy Puzzles
Discover the wonders of nature with this cozy puzzle game. Download Now!
I have been playing The Art of Fauna while procrastinating, and it is the most relaxing game, with beautiful illustrations of animals. I cannot recommend it enough: theartof.app/fauna/
25.08.2025 12:06 — 👍 3 🔁 0 💬 0 📌 0
For a week, we visited @iyadrahwan.bsky.social's Center for Humans and Machines in Berlin, where we have met an incredible array of interdisciplinary researchers.
Special thanks to @neeleengelmann.bsky.social for hosting us and @alice-ross.bsky.social for organising the trip from the start!
16.08.2025 20:24 — 👍 5 🔁 0 💬 0 📌 0
Have been neglecting Bluesky recently, so I am happy to share a big update 🎉
I will join @cmu.edu as a postdoc in September working with the incomparable @atoosakz.bsky.social and Nihar Shah on understanding risks from LLM co-scientists. If you are in Pittsburgh, I would love to connect!
16.08.2025 20:12 — 👍 6 🔁 0 💬 1 📌 0
The Edinburgh RL & Agents reading group is back in action after a hiatus. Previous speakers have come from all across the world, including DeepMind, CMU, Oxford, NUS, etc. Sign up for some great discussions about cutting-edge RL and agents research.
10.07.2025 14:09 — 👍 3 🔁 0 💬 0 📌 0
The absolute state of peer review...
05.07.2025 21:18 — 👍 4 🔁 0 💬 0 📌 0
Orban: Budapest Pride is banned.
Budapest Pride: You cannot ban love ♥️🏳️🌈
Reportedly more than 500,000 people attended.
28.06.2025 15:00 — 👍 2924 🔁 655 💬 62 📌 63
Shannon Vallor and Fabio Tollon on stage presenting their landscape study of responsible AI
The @braiduk.bsky.social gathering did an amazing job of presenting artists and researchers who address real-world questions around AI by actually engaging with people and learning from them. After hearing two weeks of technical talks at CHAI and RLDM, this was a most welcome change of pace.
19.06.2025 17:50 — 👍 2 🔁 1 💬 0 📌 0
Members of the Edinburgh RL group in front of the RLDM poster
I had the most amazing time at RLDM, learning a lot about RL and agent foundations, catching up with old friends, and meeting new ones.
Two things that really stood out to me are:
- Agency is Frame Dependent by Dave Abel
- Rethinking Foundations of Continual RL by Michael Bowling
#RLDM2025
14.06.2025 17:04 — 👍 3 🔁 0 💬 0 📌 0
I am heading to RLDM in Dublin this week to present our work on objective evaluation metrics for explainable RL. Hit me up there or send me a DM to connect if you are around.
09.06.2025 18:37 — 👍 2 🔁 0 💬 0 📌 0
Flowchart of the AXIS algorithm with 5 parts. The top-left has the memory, the centre-left has the user query, the centre-bottom has the final explanation, the centre has the LLM, and the right has the multi-agent simulator.
Screenshot of the arXiv paper
Preprint alert 🎉 Introducing the Agentic eXplanations via Interrogative Simulations (AXIS) algorithm.
AXIS integrates multi-agent simulators with LLMs by having the LLMs interrogate the simulator with counterfactual queries over multiple rounds for explaining agent behaviour.
arxiv.org/pdf/2505.17801
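Roughly, the loop looks something like this. A toy sketch of my reading of the abstract, not the authors' code: the simulator, actions, and outcomes below are entirely made up for illustration.

```python
def simulate(action: str) -> str:
    """Stand-in multi-agent simulator: returns the outcome of a counterfactual action."""
    outcomes = {
        "brake": "stops safely",
        "accelerate": "collides",
        "swerve": "leaves lane",
    }
    return outcomes.get(action, "unknown")


def interrogate(actions: list[str], rounds: int = 3) -> str:
    """Pose counterfactual 'what if' queries to the simulator over several rounds,
    collecting evidence that an LLM could then turn into an explanation."""
    evidence = []
    for action in actions[:rounds]:
        evidence.append((action, simulate(action)))
    # In AXIS, an LLM would pick each next query and summarise the evidence;
    # here we simply join the collected evidence into a stub explanation.
    return "; ".join(f"if {a}: {o}" for a, o in evidence)


explanation = interrogate(["brake", "accelerate", "swerve"])
```

The key idea is that the explanation is grounded in simulator roll-outs rather than in the LLM's guesses alone.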
30.05.2025 14:35 — 👍 8 🔁 1 💬 0 📌 0
What can policy makers learn from “AI safety for everyone” (Read here: www.nature.com/articles/s42... ; joint work with @gbalint.bsky.social )? I wrote about some policy lessons for Tech Policy Press.
23.05.2025 20:32 — 👍 17 🔁 6 💬 0 📌 0
YouTube video by Michael Winikoff
A Scoresheet for Explainable AI
Delighted to share our most recent paper: "A Scoresheet for Explainable AI" (with John & Sebastian).
It will be presented at @aamasconf.bsky.social later this month.
🎬 Short YouTube summary (5 minutes): www.youtube.com/watch?v=GCpf...
📝 Link to the paper on arXiv: arxiv.org/abs/2502.098...
09.05.2025 04:10 — 👍 4 🔁 1 💬 0 📌 0
View of an auditorium stage with slides showing in the centre and Wakamiya-san and a sign-language interpreter showing on the sides.
90-year-old Masako Wakamiya at the final keynote of #CHI2025 shared a cautiously optimistic vision of the future of AI and humanity, especially for the elderly, as we enter the age of 100-year-long lives. Her speech and work are truly inspiring.
01.05.2025 03:29 — 👍 5 🔁 0 💬 0 📌 0
White body towel in plastic packaging with black text
Oh yeah. In English, because apparently it sounds sophisticated, or at least that is what I have heard on the internet... So it must be true
30.04.2025 09:39 — 👍 1 🔁 0 💬 0 📌 0
One of the things I find most unique about Japan is the unnecessary and questionable motivational quotes on just about anything.
"Humans can only put out what has been put into them."
says my pre-packed body towel in the fanciest of fonts.
Inspiring stuff
30.04.2025 09:34 — 👍 4 🔁 0 💬 1 📌 0
A white maneki-neko plushie with a CHI2025 scarf looking extra cute
The #CHI2025 plushie is looking too cute:
29.04.2025 02:45 — 👍 2 🔁 0 💬 0 📌 0
White translucent badge tag on a wooden table that says rejected author
#CHI2025 has a badge tag for rejected authors. 🥲 I couldn't resist getting one for future use.
27.04.2025 01:32 — 👍 9 🔁 1 💬 0 📌 0
Our key takeaways are:
1. Designing causality for explanations from first principles is essential to fully understand what explanations to give to people about autonomous agents;
2. People prefer goal-oriented explanations for AVs, so focusing on those first might be beneficial.
🧵 7/7
24.04.2025 10:42 — 👍 0 🔁 0 💬 0 📌 0
We also find that counterfactual explanations were not as effective at calibrating trust, suggesting that in more complex domains, such as with AVs, it may be more useful to focus on goal-oriented explanations first.
🧵 6/7
24.04.2025 10:42 — 👍 0 🔁 0 💬 1 📌 0
We find the best predictor of both perceived explanation quality and trust calibration is the degree of teleology in the explanations.
In other words, people seem to prefer explanations that are goal-oriented.
This supports the idea that they ascribe beliefs, desires, and intentions to AVs.
🧵 5/7
24.04.2025 10:42 — 👍 0 🔁 0 💬 1 📌 0