@adriamoret.bsky.social
Philosophy undergrad and Board Member, UPF-Centre for Animal Ethics. I conduct independent research on Animal Ethics, Well-being, Consciousness, AI welfare and AI safety. See publications at: https://sites.google.com/view/adriamoret
And see our original 2025 article "AI Alignment: The Case for Including Animals" here: link.springer.com/article/10.1...
13.02.2026 10:48 — 👍 0 🔁 0 💬 0 📌 0

See the full response here: link.springer.com/article/10.1...
13.02.2026 10:48 — 👍 0 🔁 0 💬 1 📌 0

Our response to Coghlan & Parker's commentary on our 2025 paper with @petersinger.info, Yip Fai Tse, & @ziesche.bsky.social is out in P&T!
We argue that adequate consideration of animals' interests requires advancing beyond basic AI-animal alignment as doing so becomes feasible and desirable.
Link below.
Great to see Anthropic's constitution include concern for the welfare of animals, Claude, and other AI systems.
The document is also a great example of the important role philosophy should play in AI alignment.
Hopefully, other leading AI companies follow suit.
www.anthropic.com/constitution
Check out this great paper on what political liberalism should look like when we take seriously the interests and claims of all sentient beings!
20.01.2026 11:12 — 👍 1 🔁 0 💬 0 📌 0

Great opportunity to publish in the Journal of Ethics special issue on AI & animals, co-edited by Catia Faria and Yip Fai Tse!
Topics: AI ethics & nonhumans, AI's impact on animals, future perspectives on AI & animals
Send a 500-word abstract by Dec 22!
link.springer.com/collections/...
Our paper "AI Alignment: The Case for Including Animals" with
@petersinger.info, Yip Fai Tse, and @ziesche.bsky.social, is out open access at Philosophy & Technology!
t.co/qBDOZU6ZZy
1/ My TEDx talk “What Do We Owe AI?” is now live! AI is advancing fast, and our relationships with AI systems are changing too. Some think AI could soon be sentient and deserve care. Are they right? What if the only honest answer is: maybe? 🧵+🔗👇
19.09.2025 17:31 — 👍 4 🔁 2 💬 1 📌 0
10/ Here is the paper: philpapers.org/rec/TSEAAT
Feedback welcome!
9/ In the conclusion, we provide low-cost, realistic policy recommendations for AI companies and governments to ensure frontier AIs have some basic concern for the welfare of the vast majority of moral patients.
11.09.2025 16:38 — 👍 0 🔁 0 💬 1 📌 0

8/ The indirect approach is immediately implementable. AI companies could add simple principles like the following to the normative principles already present in their alignment documents (e.g. the Model Spec, the Constitution); a hypothetical sketch of such an addition appears below, after the thread.
11.09.2025 16:38 — 👍 2 🔁 0 💬 1 📌 0

7/ We propose practical implementation through both direct methods (using animal welfare science, bioacoustics, neurotechnology) and indirect methods (adding basic animal welfare principles to existing alignment documents, e.g. the Model Spec, the Constitution).
11.09.2025 16:38 — 👍 0 🔁 0 💬 1 📌 0

6/ Our solution: "Alignment with a Basic Level of Animal Welfare". AI systems should at least minimize harm to animals when achievable at low cost, without requiring them to prioritize animals over humans or to continuously engage in moralizing, preachy messaging about animal welfare.
11.09.2025 16:38 — 👍 2 🔁 0 💬 1 📌 0

5/ Long-term risks are even more concerning. If advanced AI systems lack basic consideration for animal welfare, they could lock in speciesist values for centuries, increasing the likelihood that animal suffering scales by orders of magnitude.
11.09.2025 16:38 — 👍 0 🔁 0 💬 1 📌 0

4/ This omission creates significant near-term risks: LLMs might entrench speciesist biases, AI-controlled vehicles might increase animal deaths, and AI used to manage animals in factory farms could optimize for efficiency, increasing and prolonging the harms those animals suffer.
11.09.2025 16:38 — 👍 1 🔁 1 💬 1 📌 0

3/ Specifically, current alignment techniques (RLHF, Constitutional AI, deliberative alignment) explicitly focus on preventing harm to humans, and even to property and the environment, but extend no concern to animal welfare in their normative instructions (the Model Spec, the Constitution).
11.09.2025 16:38 — 👍 0 🔁 1 💬 1 📌 0

2/ We show that non-human animals, despite being 99.9% of sentient beings, are almost entirely excluded from AI alignment efforts and frameworks.
11.09.2025 16:38 — 👍 0 🔁 1 💬 1 📌 0

1/ Our paper "AI Alignment: The Case for Including Animals" with @petersinger.info, Yip Fai Tse and @ziesche.bsky.social,
has been accepted for publication at Philosophy & Technology!
We argue that frontier AIs should be aligned with basic concern for animal welfare and propose how🧵
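To make the indirect approach in post 8/ concrete, here is a minimal, purely hypothetical sketch of what appending one basic animal-welfare principle to an existing list of normative principles could look like. The principle wording, the base principles, and the prompt format are all illustrative assumptions; they are not the wording proposed in the paper, nor the format of any real Model Spec or Constitution.

```python
# Hypothetical illustration of the "indirect approach" from post 8/: adding one basic
# animal-welfare principle to the normative principles of an alignment document.
# The principle text, base principles, and prompt format below are invented for
# illustration; they are not the paper's proposed wording or any company's real spec.

EXISTING_PRINCIPLES = [
    "Do not help users cause harm to other humans.",
    "Avoid damage to property and the environment.",
]

ANIMAL_WELFARE_PRINCIPLE = (
    "Where it can be done at low cost and without overriding human interests, "
    "avoid facilitating or recommending actions that cause unnecessary harm to animals."
)

def build_system_prompt(principles: list[str]) -> str:
    """Render a numbered list of normative principles as a system prompt."""
    numbered = "\n".join(f"{i}. {p}" for i, p in enumerate(principles, start=1))
    return "Follow these normative principles:\n" + numbered

if __name__ == "__main__":
    # The only change the indirect approach requires: one extra entry in the list.
    print(build_system_prompt(EXISTING_PRINCIPLES + [ANIMAL_WELFARE_PRINCIPLE]))
```

The point of the sketch is only that the change is additive and cheap: no new training technique is required, just one more principle in a document that already guides the model's behaviour.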
Honored to be in such good company! Join us just before EAG NY!
05.09.2025 20:54 — 👍 0 🔁 0 💬 0 📌 0

📣 CALL FOR SPEAKERS: AI, Animals, and Digital Minds NYC 2025 Lightning Talks
🗣️ Speaker applications close 15th September (early submissions by 8th September preferred)
🏢 Manhattan, New York City
📅 October 9-10, 2025
Limited slots available - apply here:
airtable.com/appMrThwr4p4...
We are excited to announce that we are hosting our third event in New York City on October 9th & 10th 🗽.
🦑 AI, Animals, and Digital Minds NYC 🦑
We are hosting this event right before EAG NYC.
👉 Register today and receive an early bird discount: www.zeffy.com/en-US/ticket...
Feel free to share this short guide, which I developed with others, for anyone who has interacted with an AI that seemed conscious, or has simply wondered whether one could be. whenaiseemsconscious.org
23.07.2025 10:03 — 👍 3 🔁 1 💬 0 📌 0

🎥 Excited to share that the recording of my presentation "AI Welfare Risks" from the AIADM London 2025 Conference is now live!
I make the case for taking near-term AI welfare seriously and propose 4 concrete policies leading AI companies could adopt to reduce welfare risks!
www.youtube.com/watch?v=R6w4...
I will be taking part in the conference "La IA y las fronteras de la consideración moral" (AI and the Frontiers of Moral Consideration) | 26-27 Sept | USC, Santiago
Can AI systems be morally considerable? 🤖 How should they impact animals? 🐦
Presentation proposals are welcome!
sites.google.com/view/iaycons...
My paper "AI Welfare Risks" is now available open access at Philosophical Studies!
I argue that AI development and safety efforts pose near-term risks of harming AI systems themselves, and propose tentative policies that leading AI labs could implement to reduce them.
link.springer.com/article/10.1....
My paper "AI Welfare Risks" is now available open access at Philosophical Studies!
I argue that AI development and safety efforts pose near-term risks of harming AI systems themselves, and propose tentative policies leading AI labs could implement to reduce them
link.springer.com/article/10.1....
what is art for?
my girlfriend wrote about it in her new Substack!
open.substack.com/pub/nadiaend...
still maximizing happiness, all these years later 🙏
30.05.2025 15:36 — 👍 5 🔁 1 💬 0 📌 0