Our paper "AI Alignment: The Case for Including Animals" with
@petersinger.info, Yip Fai Tse, and @ziesche.bsky.social, is out open access at Philosophy & Technology!
t.co/qBDOZU6ZZy
@adriamoret.bsky.social
Philosophy undergrad and Board Member, UPF-Centre for Animal Ethics. I conduct independent research on animal ethics, well-being, consciousness, AI welfare and AI safety. See publications at: https://sites.google.com/view/adriamoret
1/ My TEDx talk “What Do We Owe AI?” is now live! AI is advancing fast, and our relationships with AI systems are changing too. Some think AI could soon be sentient and deserve care. Are they right? What if the only honest answer is: maybe? 🧵+🔗👇
19.09.2025 17:31
10/ Here is the paper: philpapers.org/rec/TSEAAT
Feedback welcome!
9/ In the conclusion, we provide low-cost, realistic policy recommendations for AI companies and governments to ensure frontier AIs have some basic concern for the welfare of the vast majority of moral patients.
11.09.2025 16:38
8/ The indirect approach is immediately implementable. AI companies could add simple principles like the following to the normative principles already present in their alignment documents (e.g. ModelSpec, the Constitution):
11.09.2025 16:38
7/ We propose practical implementation through both direct methods (using animal welfare science, bioacoustics, neurotechnology) and indirect methods (adding basic animal welfare principles to existing alignment documents, e.g. ModelSpec, the Constitution).
11.09.2025 16:38
6/ Our solution: "Alignment with a Basic Level of Animal Welfare": AI systems should at least minimize harm to animals when achievable at low cost, without requiring them to prioritize animals over humans or to continuously engage in moralizing, preachy messaging about animal welfare.
11.09.2025 16:38
5/ Long-term risks are even more concerning. If advanced AI systems lack basic consideration for animal welfare, they could lock in speciesist values for centuries, increasing the likelihood that animal suffering scales by orders of magnitude.
11.09.2025 16:38
4/ This omission creates significant near-term risks: LLMs might entrench speciesist biases, AI-controlled vehicles might increase animal deaths, and AI used to manage animals in factory farms could optimize for efficiency, increasing and prolonging the harms they suffer.
11.09.2025 16:38
3/ Specifically, current alignment techniques (RLHF, Constitutional AI, deliberative alignment) explicitly focus on preventing harm to humans, and even to property and the environment, but extend no concern to animal welfare in their normative instructions (ModelSpec, the Constitution).
11.09.2025 16:38
2/ We show that non-human animals, despite being 99.9% of sentient beings, are almost entirely excluded from AI alignment efforts and frameworks.
11.09.2025 16:38
1/ Our paper "AI Alignment: The Case for Including Animals", with @petersinger.info, Yip Fai Tse and @ziesche.bsky.social, has been accepted for publication at Philosophy & Technology!
We argue that frontier AIs should be aligned with basic concern for animal welfare, and propose how 🧵
Honored to be so well accompanied! Join us just before EAG NY!
05.09.2025 20:54
📣 CALL FOR SPEAKERS: AI, Animals, and Digital Minds NYC 2025 Lightning Talks
🗣️ Speaker applications close 15th September (early submissions by 8th September preferred)
🏢 Manhattan, New York City
📅 October 9-10, 2025
Limited slots available - apply here:
airtable.com/appMrThwr4p4...
We are excited to announce that we are hosting our third event in New York City on October 9th & 10th 🗽.
🦑 AI, Animals, and Digital Minds NYC 🦑
We are hosting this event right before EAG NYC.
👉 Register today and receive an early bird discount: www.zeffy.com/en-US/ticket...
Feel free to share this short guide that others and I developed for anyone who has interacted with an AI that seemed conscious, or simply wondered whether it could be: whenaiseemsconscious.org
23.07.2025 10:03
🎥 Excited to share that the recording of my presentation "AI Welfare Risks" from the AIADM London 2025 Conference is now live!
I make the case for near-term AI welfare and propose 4 concrete policies for leading AI companies to reduce welfare risks!
www.youtube.com/watch?v=R6w4...
I will be taking part in the conference "La IA y las fronteras de la consideración moral" | 26-27 Sept | USC, Santiago
Can AI systems be morally considerable? 🤖 How should they impact animals? 🐦
Presentation proposals are welcome!
sites.google.com/view/iaycons...
My paper "AI Welfare Risks" is now available open access at Philosophical Studies!
I argue that AI development and safety efforts pose near-term risks of harming AI systems themselves, and propose tentative policies that leading AI labs could implement to reduce them.
link.springer.com/article/10.1....
what is art for?
my girlfriend wrote about it in her new Substack!
open.substack.com/pub/nadiaend...
still maximizing happiness, all these years later 🙏
30.05.2025 15:36
Excited to introduce Adrià Moret as a speaker at AI, Animals, & Digital Minds London 2025! 🔍
His philosophical writings bring valuable perspectives to #AIADM25 on the intersection of digital minds, animal ethics, and ways to reduce risks to AI welfare.
🎟️ Register today! Link in bio.
What is (potentially) the best neglected and low-cost intervention to reduce future risks of harm to animals and digital minds?
Join me on this Rethink Priorities webinar (May 28th) to find out!
See lu.ma/gy3l2qek for further details.
I have a new website (sites.google.com/view/adria-m...), where I'll compile my papers (published and under review) and my upcoming presentations.
03.05.2025 16:05
My paper "AI Welfare Risks" has been accepted for publication at Philosophical Studies!
I argue that near-future AIs may have welfare, that RL and behaviour restrictions could harm them, and that this creates a tension with AI safety; I also propose how AI labs could reduce these welfare risks. 1/
Here's the paper: philpapers.org/rec/MORAWR
For useful feedback, thanks to
@jeffsebo.bsky.social, @elliottthornley.bsky.social, @pmagana94.bsky.social and others. This may also be of interest to @eze-pz.bsky.social, @eschwitz.bsky.social, @birchlse.bsky.social, @petersinger.info, @jacyanthis.bsky.social