
Adrià Moret

@adriamoret.bsky.social

Philosophy undergrad and Board Member, UPF-Centre for Animal Ethics. I conduct independent research on Animal Ethics, Well-being, Consciousness, AI welfare and AI safety. See publications at: https://sites.google.com/view/adriamoret

37 Followers  |  42 Following  |  34 Posts  |  Joined: 30.11.2024

Latest posts by adriamoret.bsky.social on Bluesky

https://link.springer.com/article/10.1007/s13347-025-00979-1

Our paper "AI Alignment: The Case for Including Animals" with
@petersinger.info, Yip Fai Tse, and @ziesche.bsky.social, is out open access at Philosophy & Technology!

t.co/qBDOZU6ZZy

14.10.2025 13:19 — 👍 2    🔁 1    💬 0    📌 0

1/ My TEDx talk “What Do We Owe AI?” is now live! AI is advancing fast, and our relationships with AI systems are changing too. Some think AI could soon be sentient and deserve care. Are they right? What if the only honest answer is: maybe? 🧵+🔗👇

19.09.2025 17:31 — 👍 4    🔁 2    💬 1    📌 0
Preview
Yip Fai Tse, Adrià Moret, Soenke Ziesche & Peter Singer, AI Alignment: The Case for Including Animals - PhilPapers
AI alignment efforts and proposals try to make AI systems ethical, safe and beneficial for humans by making them follow human intentions, preferences or values. However, these proposals largely disreg...

10/ Here is the paper: philpapers.org/rec/TSEAAT

Feedback welcome!

11.09.2025 16:38 — 👍 0    🔁 0    💬 0    📌 0

9/ In the conclusion, we provide low-cost, realistic policy recommendations for AI companies and governments to ensure frontier AIs have some basic concern for the welfare of the vast majority of moral patients.

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

8/ The indirect approach is immediately implementable. AI companies could add simple principles like the following to the normative principles present in their alignment documents (e.g. ModelSpec, the Constitution...):

11.09.2025 16:38 — 👍 2    🔁 0    💬 1    📌 0

7/ We propose practical implementation through both direct methods (using animal welfare science, bioacoustics, neurotechnology) and indirect methods (adding basic animal welfare principles to existing alignment documents, e.g. ModelSpec, the Constitution).

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

6/ Our solution: "Alignment with a Basic Level of Animal Welfare". AI systems should at least minimize harm to animals when achievable at low cost, without requiring them to prioritize animals over humans or continuously engage in moralizing, preachy messaging about animal welfare.

11.09.2025 16:38 — 👍 2    🔁 0    💬 1    📌 0

5/ Long-term risks are even more concerning. If advanced AI systems lack basic consideration for animal welfare, they could lock in speciesist values for centuries, increasing the likelihood that animal suffering scales by orders of magnitude.

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

4/ This omission creates significant near-term risks: LLMs might entrench speciesist biases, AI-controlled vehicles might lead to increased animal deaths, and AI used to manage animals in factory farms could optimize for efficiency, increasing and prolonging the harms they suffer.

11.09.2025 16:38 — 👍 0    🔁 0    💬 1    📌 0

3/ Specifically, current alignment techniques (RLHF, Constitutional AI, deliberative alignment) explicitly focus on preventing harm to humans and even to property and the environment, but extend no concern to animal welfare in their normative instructions (ModelSpec, the Constitution).

11.09.2025 16:38 — 👍 1    🔁 1    💬 1    📌 0

2/ We show that non-human animals—despite being 99.9% of sentient beings—are almost entirely excluded from AI alignment efforts and frameworks.

11.09.2025 16:38 — 👍 0    🔁 1    💬 1    📌 0

1/ Our paper "AI Alignment: The Case for Including Animals" with @petersinger.info, Yip Fai Tse and @ziesche.bsky.social,
has been accepted for publication at Philosophy & Technology!

We argue that frontier AIs should be aligned with basic concern for animal welfare and propose how 🧵

11.09.2025 16:38 — 👍 5    🔁 1    💬 1    📌 0

Honored to be so well accompanied! Join us just before EAG NY!

05.09.2025 20:54 — 👍 0    🔁 0    💬 0    📌 0

📣 𝗖𝗔𝗟𝗟 𝗙𝗢𝗥 𝗦𝗣𝗘𝗔𝗞𝗘𝗥𝗦: 𝗔𝗜, 𝗔𝗻𝗶𝗺𝗮𝗹𝘀, 𝗮𝗻𝗱 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗠𝗶𝗻𝗱𝘀 𝗡𝗬𝗖 𝟮𝟬𝟮𝟱 𝗟𝗶𝗴𝗵𝘁𝗻𝗶𝗻𝗴 𝗧𝗮𝗹𝗸𝘀

🗣️ 𝗦𝗽𝗲𝗮𝗸𝗲𝗿 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗰𝗹𝗼𝘀𝗲 𝟭𝟱𝘁𝗵 𝗦𝗲𝗽𝘁𝗲𝗺𝗯𝗲𝗿 (early submissions by 8th September preferred)
🏢 Manhattan, New York City
📅 October 9-10, 2025

𝗟𝗶𝗺𝗶𝘁𝗲𝗱 𝘀𝗹𝗼𝘁𝘀 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗹𝗲 - 𝗮𝗽𝗽𝗹𝘆 𝗵𝗲𝗿𝗲:
airtable.com/appMrThwr4p4...

29.08.2025 06:45 — 👍 1    🔁 1    💬 0    📌 0

𝗪𝗲 𝗮𝗿𝗲 𝗲𝘅𝗰𝗶𝘁𝗲𝗱 𝘁𝗼 𝗮𝗻𝗻𝗼𝘂𝗻𝗰𝗲 𝘁𝗵𝗮𝘁 𝘄𝗲 𝗮𝗿𝗲 𝗵𝗼𝘀𝘁𝗶𝗻𝗴 𝗼𝘂𝗿 𝘁𝗵𝗶𝗿𝗱 𝗲𝘃𝗲𝗻𝘁 𝗶𝗻 𝗡𝗲𝘄 𝗬𝗼𝗿𝗸 𝗖𝗶𝘁𝘆 𝗼𝗻 𝗢𝗰𝘁𝗼𝗯𝗲𝗿 𝟵𝘁𝗵 & 𝟭𝟬𝘁𝗵 🗽.
🦑 𝗔𝗜, 𝗔𝗻𝗶𝗺𝗮𝗹𝘀, 𝗮𝗻𝗱 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗠𝗶𝗻𝗱𝘀 𝗡𝗬𝗖 🦑

We are hosting this event right before EAG NYC.

👉 𝗥𝗲𝗴𝗶𝘀𝘁𝗲𝗿 𝘁𝗼𝗱𝗮𝘆 𝗮𝗻𝗱 𝗿𝗲𝗰𝗲𝗶𝘃𝗲 𝗮𝗻 𝗲𝗮𝗿𝗹𝘆 𝗯𝗶𝗿𝗱 𝗱𝗶𝘀𝗰𝗼𝘂𝗻𝘁: www.zeffy.com/en-US/ticket...

13.08.2025 16:06 — 👍 3    🔁 1    💬 1    📌 0

Feel free to share this short guide that I developed with others, for anyone who has interacted with an AI that seemed conscious — or simply wondered whether it could be. whenaiseemsconscious.org

23.07.2025 10:03 — 👍 3    🔁 1    💬 0    📌 0
AI Welfare Risks: Four Tentative AI Welfare Policies | Adrià Moret | AIADM London 2025
YouTube video by AI for Animals

🎥 Excited to share that the recording of my presentation "AI Welfare Risks" from the AIADM London 2025 Conference is now live!

I make the case for near-term AI welfare and propose 4 concrete policies for leading AI companies to reduce welfare risks!

www.youtube.com/watch?v=R6w4...

23.07.2025 10:02 — 👍 2    🔁 1    💬 0    📌 0

I will take part in the conference "La IA y las fronteras de la consideración moral" (AI and the Frontiers of Moral Consideration) | 26-27 Sept | USC, Santiago

Can AI systems be morally considerable? 🤖 How should they impact animals? 🐦

Presentation proposals are welcome!

sites.google.com/view/iaycons...

14.07.2025 10:31 — 👍 1    🔁 0    💬 0    📌 0
Preview
AI welfare risks - Philosophical Studies
In the coming years or decades, as frontier AI systems become more capable and agentic, it is increasingly likely that they meet the sufficient conditions to be welfare subjects under the three major ...

My paper "AI Welfare Risks" is now available open access at Philosophical Studies!

I argue that AI development and safety efforts pose near-term risks of harming AI systems themselves, and propose tentative policies leading AI labs could implement to reduce them

link.springer.com/article/10.1....

09.06.2025 09:02 — 👍 1    🔁 1    💬 0    📌 0
Preview
The useful uselessness of art
Reflections upon the discovery of the concept of "art therapy"

what is art for?
my girlfriend wrote about it in her new Substack!

open.substack.com/pub/nadiaend...

06.06.2025 20:42 — 👍 2    🔁 1    💬 0    📌 0

still maximizing happiness, all these years later 🙏

30.05.2025 15:36 — 👍 5    🔁 1    💬 0    📌 0

𝗘𝘅𝗰𝗶𝘁𝗲𝗱 𝘁𝗼 𝗶𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗲 Adrià Moret 𝗮𝘀 𝗮 𝘀𝗽𝗲𝗮𝗸𝗲𝗿 𝗮𝘁 𝗔𝗜, 𝗔𝗻𝗶𝗺𝗮𝗹𝘀, & 𝗗𝗶𝗴𝗶𝘁𝗮𝗹 𝗠𝗶𝗻𝗱𝘀 𝗟𝗼𝗻𝗱𝗼𝗻 𝟮𝟬𝟮𝟱! 🔍

His philosophical writings bring valuable perspectives to #𝗔𝗜𝗔𝗗𝗠𝟮𝟱 on the intersection of digital minds, animal ethics, and ways to reduce risks to AI welfare.

🎟️ Register today! Link in bio.

09.05.2025 12:58 — 👍 5    🔁 1    💬 1    📌 0
Preview
Including Animal and AI Welfare in AI Alignment (RP Strategic Animal Webinars) · Zoom · Luma
We’re launching the RP Strategic Animal Webinars series and want to invite you to our first talk and Q&A session: Including Animal and AI Welfare in AI…

What is (potentially) the best neglected and low-cost intervention to reduce future risks of harm to animals and digital minds?

Join me on this Rethink Priorities webinar (May 28th) to find out!

See lu.ma/gy3l2qek for further details.

01.05.2025 21:06 — 👍 2    🔁 1    💬 0    📌 0
Preview
Home | Adrià Moret

I have a new website (sites.google.com/view/adria-m...), where I'll compile my papers (published and under review) and my upcoming presentations.

03.05.2025 16:05 — 👍 3    🔁 0    💬 0    📌 0

My paper "AI Welfare Risks" has been accepted for publication at Philosophical Studies!

I argue that near-future AIs may have welfare and that RL and behaviour restrictions could harm them, creating a tension with AI safety. I also propose how AI labs could reduce these welfare risks. 1/

01.05.2025 08:35 — 👍 10    🔁 3    💬 1    📌 1
Preview
Adrià Moret, AI Welfare Risks - PhilPapers
In the coming years or decades, as frontier AI systems become more capable and agentic, it is increasingly likely that they meet the sufficient conditions to be welfare subjects under the ...

Here's the paper: philpapers.org/rec/MORAWR
For useful feedback, thanks to
@jeffsebo.bsky.social, @elliottthornley.bsky.social, @pmagana94.bsky.social and others. May also be of interest to @eze-pz.bsky.social, @eschwitz.bsky.social, @birchlse.bsky.social, @petersinger.info, @jacyanthis.bsky.social

01.05.2025 08:35 — 👍 4    🔁 1    💬 0    📌 0
