
Adrià Moret

@adriamoret.bsky.social

Philosophy undergrad and Board Member, UPF-Centre for Animal Ethics. I conduct independent research on Animal Ethics, Well-being, Consciousness, AI welfare, and AI safety. See publications at: https://sites.google.com/view/adriamoret

41 Followers  |  43 Following  |  40 Posts  |  Joined: 30.11.2024

Latest posts by adriamoret.bsky.social on Bluesky

AI Alignment: The Case for Including Animals - Philosophy & Technology AI alignment efforts and proposals try to make AI systems ethical, safe and beneficial for humans by making them follow human intentions, preferences or values. However, these proposals largely disreg...

And see our original 2025 article "AI Alignment: The Case for Including Animals" here: link.springer.com/article/10.1...

13.02.2026 10:48 — 👍 0    🔁 0    💬 0    📌 0
Navigating AI-Animal Alignment: A Reply to Coghlan and Parker - Philosophy & Technology This commentary responds to Coghlan and Parker's commentary on our paper "AI Alignment: The Case for Including Animals" (2025). We clarify that our emphasis on "basic" alignment with animal welfare in...

See the full response here: link.springer.com/article/10.1...

13.02.2026 10:48 — 👍 0    🔁 0    💬 1    📌 0

Our response to Coghlan & Parker's commentary on our 2025 paper with @petersinger.info, Yip Fai Tse, & @ziesche.bsky.social is out in P&T!

We argue adequate consideration of animals' interests requires advancing beyond basic AI-animal alignment as it becomes feasible and desirable.

Link below.

13.02.2026 10:48 — 👍 1    🔁 1    💬 1    📌 0

Great to see Anthropic's constitution include concern for the welfare of animals, Claude, and other AI systems.

The document is also a great example of the important role philosophy should play in AI alignment.

Hopefully, other leading AI companies follow suit.

www.anthropic.com/constitution

22.01.2026 01:14 — 👍 2    🔁 1    💬 0    📌 0

Check out this great paper on what political liberalism should look like when we take seriously the interests and claims of all sentient beings!

20.01.2026 11:12 — 👍 1    🔁 0    💬 0    📌 0

Great opportunity to publish in the Journal of Ethics special issue on AI & animals, co-edited by Catia Faria and Yip Fai Tse!

Topics: AI ethics & nonhumans, AI's impact on animals, future perspectives on AI & animals

Send a 500-word abstract by Dec 22!
link.springer.com/collections/...

20.11.2025 10:23 — 👍 2    🔁 0    💬 0    📌 0
https://link.springer.com/article/10.1007/s13347-025-00979-1

Our paper "AI Alignment: The Case for Including Animals" with
@petersinger.info, Yip Fai Tse, and @ziesche.bsky.social, is out open access at Philosophy & Technology!

t.co/qBDOZU6ZZy

14.10.2025 13:19 — 👍 2    🔁 1    💬 0    📌 0

1/ My TEDx talk “What Do We Owe AI?” is now live! AI is advancing fast, and our relationships with AI systems are changing too. Some think AI could soon be sentient and deserve care. Are they right? What if the only honest answer is: maybe? 🧵+🔗👇

19.09.2025 17:31 — 👍 4    🔁 2    💬 1    📌 0

1/ Our paper "AI Alignment: The Case for Including Animals" with @petersinger.info, Yip Fai Tse and @ziesche.bsky.social,
has been accepted for publication at Philosophy & Technology!

We argue that frontier AIs should be aligned with basic concern for animal welfare and propose how🧵

11.09.2025 16:38 — 👍 6    🔁 1    💬 1    📌 0
Yip Fai Tse, Adrià Moret, Soenke Ziesche & Peter Singer, AI Alignment: The Case for Including Animals - PhilPapers AI alignment efforts and proposals try to make AI systems ethical, safe and beneficial for humans by making them follow human intentions, preferences or values. However, these proposals largely disreg...

10/ Here is the paper: philpapers.org/rec/TSEAAT

Feedback welcome!

11.09.2025 16:38 — 👍 0    🔁 0    💬 0    📌 0

9/ In the conclusion, we provide low-cost, realistic policy recommendations for AI companies and governments to ensure frontier AIs have some basic concern for the welfare of the vast majority of moral patients.

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

8/ The indirect approach is immediately implementable. AI companies could add simple principles like the following to the normative principles already present in their alignment documents (e.g. ModelSpec, the Constitution...):

11.09.2025 16:38 — 👍 2    🔁 0    💬 1    📌 0

7/ We propose practical implementation through both direct methods (using animal welfare science, bioacoustics, neurotechnology) and indirect methods (adding basic animal welfare principles to existing alignment documents, e.g. ModelSpec, the Constitution).

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

6/ Our solution: "Alignment with a Basic Level of Animal Welfare". AI systems should at least minimize harm to animals when achievable at low cost, without requiring them to prioritize animals over humans or to continuously engage in preachy, moralizing messaging about animal welfare.

11.09.2025 16:38 — 👍 2    🔁 0    💬 1    📌 0

5/ Long-term risks are even more concerning. If advanced AI systems lack basic consideration for animal welfare, they could lock in speciesist values for centuries, increasing the likelihood that animal suffering scales by orders of magnitude.

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

4/ This omission creates significant near-term risks: LLMs might entrench speciesist biases, AI-controlled vehicles might lead to increased animal deaths, and AI used to manage animals in factory farms could optimize for efficiency, increasing and prolonging the harms they suffer.

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

3/ Specifically, current alignment techniques (RLHF, Constitutional AI, deliberative alignment) explicitly focus on preventing harm to humans, and even to property and the environment, but extend no concern to animal welfare in their normative instructions (ModelSpec, the Constitution).

11.09.2025 16:38 — 👍 1    🔁 1    💬 1    📌 0

2/ We show that non-human animals—despite being 99.9% of sentient beings—are almost entirely excluded from AI alignment efforts and frameworks.

11.09.2025 16:38 — 👍 0    🔁 1    💬 1    📌 0

Honored to be so well accompanied! Join us just before EAG NY!

05.09.2025 20:54 — 👍 0    🔁 0    💬 0    📌 0

📣 𝗖𝗔𝗟𝗟 𝗙𝗢𝗥 𝗦𝗣𝗘𝗔𝗞𝗘𝗥𝗦: 𝗔𝗜, 𝗔𝗻𝗶𝗺𝗮𝗹𝘀, 𝗮𝗻𝗱 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗠𝗶𝗻𝗱𝘀 𝗡𝗬𝗖 𝟮𝟬𝟮𝟱 𝗟𝗶𝗴𝗵𝘁𝗻𝗶𝗻𝗴 𝗧𝗮𝗹𝗸𝘀

🗣️ 𝗦𝗽𝗲𝗮𝗸𝗲𝗿 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗰𝗹𝗼𝘀𝗲 𝟭𝟱𝘁𝗵 𝗦𝗲𝗽𝘁𝗲𝗺𝗯𝗲𝗿 (early submissions by 8th September preferred)
🏢 Manhattan, New York City
📅 October 9-10, 2025

𝗟𝗶𝗺𝗶𝘁𝗲𝗱 𝘀𝗹𝗼𝘁𝘀 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗹𝗲 - 𝗮𝗽𝗽𝗹𝘆 𝗵𝗲𝗿𝗲:
airtable.com/appMrThwr4p4...

29.08.2025 06:45 — 👍 1    🔁 1    💬 0    📌 0

𝗪𝗲 𝗮𝗿𝗲 𝗲𝘅𝗰𝗶𝘁𝗲𝗱 𝘁𝗼 𝗮𝗻𝗻𝗼𝘂𝗻𝗰𝗲 𝘁𝗵𝗮𝘁 𝘄𝗲 𝗮𝗿𝗲 𝗵𝗼𝘀𝘁𝗶𝗻𝗴 𝗼𝘂𝗿 𝘁𝗵𝗶𝗿𝗱 𝗲𝘃𝗲𝗻𝘁 𝗶𝗻 𝗡𝗲𝘄 𝗬𝗼𝗿𝗸 𝗖𝗶𝘁𝘆 𝗼𝗻 𝗢𝗰𝘁𝗼𝗯𝗲𝗿 𝟵𝘁𝗵 & 𝟭𝟬𝘁𝗵 🗽.
🦑 𝗔𝗜, 𝗔𝗻𝗶𝗺𝗮𝗹𝘀, 𝗮𝗻𝗱 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗠𝗶𝗻𝗱𝘀 𝗡𝗬𝗖 🦑

We are hosting this event right before EAG NYC.

👉 𝗥𝗲𝗴𝗶𝘀𝘁𝗲𝗿 𝘁𝗼𝗱𝗮𝘆 𝗮𝗻𝗱 𝗿𝗲𝗰𝗲𝗶𝘃𝗲 𝗮𝗻 𝗲𝗮𝗿𝗹𝘆 𝗯𝗶𝗿𝗱 𝗱𝗶𝘀𝗰𝗼𝘂𝗻𝘁: www.zeffy.com/en-US/ticket...

13.08.2025 16:06 — 👍 3    🔁 1    💬 1    📌 0

Feel free to share this short guide, which I developed with others, for anyone who has interacted with an AI that seemed conscious, or has simply wondered whether it could be. whenaiseemsconscious.org

23.07.2025 10:03 — 👍 3    🔁 1    💬 0    📌 0
YouTube video by AI for Animals: AI Welfare Risks: Four Tentative AI Welfare Policies | Adrià Moret | AIADM London 2025

🎥 Excited to share that the recording of my presentation "AI Welfare Risks" from the AIADM London 2025 Conference is now live!

I make the case for near-term AI welfare and propose 4 concrete policies for leading AI companies to reduce welfare risks!

www.youtube.com/watch?v=R6w4...

23.07.2025 10:02 — 👍 2    🔁 1    💬 0    📌 0

I will be taking part in the conference "La IA y las fronteras de la consideración moral" (AI and the Frontiers of Moral Consideration) | 26-27 Sept | USC, Santiago

Can AI systems be morally considerable? 🤖 How should they impact animals? 🐦

Presentation proposals are welcome!

sites.google.com/view/iaycons...

14.07.2025 10:31 — 👍 1    🔁 0    💬 0    📌 0
AI welfare risks - Philosophical Studies In the coming years or decades, as frontier AI systems become more capable and agentic, it is increasingly likely that they meet the sufficient conditions to be welfare subjects under the three major ...

My paper "AI Welfare Risks" is now available open access at Philosophical Studies!

I argue that AI development and safety efforts pose near-term risks of harming AI systems themselves, and I propose tentative policies that leading AI labs could implement to reduce them.

link.springer.com/article/10.1....

09.06.2025 09:02 — 👍 1    🔁 1    💬 0    📌 0
The useful uselessness of art Reflections upon the discovery of the concept of "art therapy"

What is art for? My girlfriend wrote about it in her new Substack!

open.substack.com/pub/nadiaend...

06.06.2025 20:42 — 👍 2    🔁 1    💬 0    📌 0

still maximizing happiness, all these years later 🙏

30.05.2025 15:36 — 👍 5    🔁 1    💬 0    📌 0
