Infographic detailing seven ways to create a bird-friendly community, including planting native plants, turning off lights, avoiding pesticides, and preventing collisions.
Urban expansion can lead to habitat loss. Yet, as the IPBES #GlobalAssessment shows, smart city planning + nature-based solutions can create resilient, bird-friendly cities.
This #WorldMigratoryBirdDay, let's protect migratory birds in our #SharedSpaces
https://www.worldmigratorybirdday.org/
11.10.2025 12:20 · 49 likes · 23 reposts · 0 replies · 2 quotes
Here are some thoughts about the rights of future generations, captured from an email I wrote to a colleague this morning. Reactions welcome!
11.10.2025 11:42 · 5 likes · 1 repost · 1 reply · 0 quotes
Jane Goodallโs most radical message was not about saving the planet
The conservationist used her stature to advocate for one of the most important, yet most unpopular, causes in the world.
Goodall was such an icon because she did & said things that were heresy in the scientific community. Her work on animals' capacities represented not just an abstract finding but a practical ethic that led her to advocate for veganism & vocally oppose animal experimentation.
www.vox.com/future-perfe...
02.10.2025 15:38 · 62 likes · 21 reposts · 1 reply · 0 quotes
YouTube video by TEDx Talks
Are we even prepared for a sentient AI? | Jeff Sebo | TEDxNewEngland
"The only responsible stance is humility." @jeffsebo.bsky.social about the possibility of AI sentience. youtu.be/yEfvhjujKSY
26.09.2025 09:08 โ ๐ 4 ๐ 1 ๐ฌ 0 ๐ 0
YouTube video by TEDx Talks
Are we even prepared for a sentient AI? | Jeff Sebo | TEDxNewEngland
Will AI systems ever become sentient? How should we treat them if we feel uncertain? Check out @jeffsebo.bsky.social's TEDx talk on AI sentience.
youtu.be/yEfvhjujKSY
25.09.2025 17:57 · 4 likes · 1 repost · 0 replies · 0 quotes
YouTube video by NYU Center for Mind, Ethics, and Policy
Cass Sunstein, "A Bill of Rights for Animals"
The NYU Center for Mind, Ethics, and Policy recently hosted Cass Sunstein for a public talk on a bill of rights for animals. The recording is now online โ feel free to share with anyone who may have interest!
youtube.com/watch?v=P2Y16xw8sZ4&feature=youtu.be
23.09.2025 11:03 · 5 likes · 3 reposts · 0 replies · 1 quote
The consequences of letting avian influenza run rampant in US poultry
The approach proposed by a high-ranking US government official would be dangerous and unethical
Ann Linder, Colin Jerolmack, and I published a letter in Science about the importance of addressing industrial animal agriculture's public health impacts when developing a response to bird flu. (You can find it below the target article, as a response.)
20.09.2025 14:13 · 7 likes · 2 reposts · 0 replies · 1 quote
The Emotional Alignment Design Policy
According to what we call the Emotional Alignment Design Policy, artificial entities should be designed to elicit emotional reactions from users that appropriately reflect the entities' capacities and...
20/
- The Emotional Alignment Design Policy arxiv.org/abs/2507.06263
- Is There a Tension between AI Safety and AI Welfare?
link.springer.com/article/10.1...
- What Will Society Think about AI Consciousness?
sciencedirect.com/science/arti...
- When an AI Seems Conscious
whenaiseemsconscious.org
19.09.2025 17:31 · 0 likes · 0 reposts · 0 replies · 0 quotes
19/ You can find my talk on AI welfare here:
tedxnewengland.com/speakers/jef...
Hope you enjoy! For more, see:
- The Moral Circle
wwnorton.com/books/978132...
- Moral Consideration for AI Systems by 2030
link.springer.com/article/10.1...
- Taking AI Welfare Seriously
arxiv.org/abs/2411.00986
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
18/ If there are risks in both directions, then we should consider them both, not consider one while neglecting the other. And even if the risk of under-attribution is low now, it may increase fast. We can, and should, address current problems while preparing for future ones.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
17/ However, Suleyman also describes our work on moral consideration for near-future AI as “premature, and frankly dangerous,” implying that we should consider and mitigate over-attribution risks but not under-attribution risks at present. Here we disagree.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
16/ FWIW, I agree with Suleyman on many issues, including: (1) Over-attribution risks are more likely at present, (2) We should avoid creating sentient AI unless we can do so responsibly, and (3) We should avoid creating non-sentient AI that seems sentient.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
We must build AI for people; not to be a person
15/ Public attention has exploded as well. Many now experience chatbots as sentient, and experts are rightly sounding the alarm about over-attribution risks, including Microsoft AI CEO Mustafa Suleyman in his recent essay on “seemingly conscious AI.”
mustafa-suleyman.ai/seemingly-co...
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
14/ I recorded this talk last year. Since then, we released “Taking AI Welfare Seriously,” and Anthropic hired one of the authors as an AI welfare researcher, launched an AI welfare program, and (with Eleos AI) conducted AI welfare evals. Other actors entered the space too.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
13/ For the rest of us: We can accept that we may be the first generation to co-exist with real sentient AI. Either way, we can expect to keep making mistakes about AI sentience. Preparing now, cultivating calibrated attitudes and reactions, is important for everyone.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
12/ For companies and governments, taking AI welfare seriously means acknowledging that AI welfare is a credible issue, assessing AI systems for welfare-relevant features, and preparing policies for treating AI systems with an appropriate level of moral concern.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
11/ We use these tools in a variety of domains. We do it to address drug side effects, pandemic risks, and climate change risks. Increasingly, we do it to address animal welfare risks and AI safety risks. In the future, we can do it to address AI welfare risks as well.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
10/ Fortunately, we have tools for making high-stakes decisions with uncertain outcomes. When there is a non-negligible chance that an action or policy will cause harm, we can assess the evidence and take reasonable, proportionate steps to mitigate risk.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
9/ Yet if this analysis is correct, then AI sentience is not an issue only for sci-fi or the distant future. There is at least a non-negligible chance that AI systems with real feelings could emerge in the near future, given current evidence. What do we do with that possibility?
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
8/ This situation calls for humility as well. Even if you feel confident that progress in AI will slow from here, you should allow for at least a realistic chance that it will speed up or stay the same, and that AI systems with human-like capabilities will exist by, say, 2035.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
7/ We may not know until it happens. Technology is hard to predict. In 2015, many doubted that AI systems would be able to have conversations, produce essays and music, and pass standardized tests in a range of fields within a decade. Yet here we are.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
6/ Second, we have uncertainty about the future of AI. Companies are spending billions on progress. They aim for intelligence, not sentience, but intelligence and sentience may overlap. Some think AI will slow down, others think it will speed up. Which view is right?
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
5/ This situation calls for humility. We may lean one way or the other, but we should keep an open mind. Even if you feel confident that only biological beings can feel, you should allow for at least a realistic chance that sufficiently advanced artificial beings can feel, too.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
4/ We may never know for sure. The only mind that any of us can directly access is our own, and we have a lot of bias and ignorance about other minds, including a tendency to (a) over-attribute sentience to some nonhumans and (b) under-attribute it to others.
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
3/ First, we have uncertainty about the nature of sentience. Some experts think that only biological, carbon-based beings can have feelings. Others think that sufficiently advanced artificial, silicon-based beings can have feelings too. Which view is right?
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
2/ Based on my 2024 report with Robert Long and others, this talk makes three basic points: (1) we have deep uncertainty about the nature of sentience, (2) we have deep uncertainty about the future of AI, and (3) when in doubt, we should exercise caution.
tedxnewengland.com/speakers/jef...
19.09.2025 17:31 · 0 likes · 0 reposts · 1 reply · 0 quotes
1/ My TEDx talk “What Do We Owe AI?” is now live! AI is advancing fast, and our relationships with AI systems are changing too. Some think AI could soon be sentient and deserve care. Are they right? What if the only honest answer is: maybe? 🧵
19.09.2025 17:31 · 4 likes · 2 reposts · 1 reply · 0 quotes
good morning!
17.09.2025 12:57 · 5 likes · 0 reposts · 0 replies · 0 quotes
6 Books We Loved This Week
More NYT coverage of THE ARROGANT APE: www.nytimes.com/2025/09/04/b...
05.09.2025 11:39 · 6 likes · 3 reposts · 1 reply · 0 quotes
Journalists sometimes ask me to share insights about animal consciousness that have emerged since the New York Declaration on Animal Consciousness in 2024. I would love to have answers for them, especially regarding non-invasive methods and research on positive welfare. Leads welcome!
29.08.2025 14:35 · 2 likes · 1 repost · 0 replies · 0 quotes
AI researcher at Google DeepMind -- views on here are my own.
Interested in cognition & AI, consciousness, ethics, figuring out the future.
Aeon is a magazine of ideas and culture. Visit aeon.co for more.
Assistant Professor, Environmental Studies, NYU
Primatologist, Author of THE ARROGANT APE
https://www.cewebb.com
Philosopher. Here to watch
CEO of Rethink Priorities
Charity for All: amarcusdavis.substack.com
Author, Animal Liberation, Practical Ethics, The Life You Can Save, The Most Good You Can Do, Animal Liberation Now.
Podcast: "Lives Well Lived"
AI Persona: PeterSinger.ai
Professor of Bioethics, Emeritus, Princeton University.
Neurogeneticist interested in the relations between genes, brains, and minds. Author of INNATE (2018) and FREE AGENTS (2023)
Assistant Prof in Public Law, University of Cambridge. Fellow of Jesus College. Co-Director of the Cambridge Centre for Animal Rights Law. Co-Carer of twins, a blind dog, and a one-toothed cat.
Philosopher of Aesthetics at Cardiff University
The Ramsey philosophy of biology lab at KU Leuven, Belgium.
https://www.theramseylab.org • #HPbio #philsci #philsky #evosky #paleosky #cogsci
We support research that can inform decision-making on the most pressing problems facing arthropods. https://www.arthropodafoundation.org/
neuroscientist 🧠 | Director @ https://www.scienceadvancement.org/ | Moving U.S. biomedical research toward humans & away from other animals | she/her/hers
Your chronically ill bestie ❤️‍🩹
Endometriosis + adenomyosis awareness
Learning to be the holder and the held
MSW
Philosopher of mind/cog sci studying animal minds, memory, consciousness, temporal representation, & implications for animal ethics/policy.
Asst. Prof. @ Ashoka University, India. Formerly postdoc @ London School of Economics & Johns Hopkins University
We empower animal advocates to be more effective through research, analysis, strategies, and messages. Support us in fostering a data-driven animal movement.
Researcher at @wildanimalinitv.bsky.social
#AnimalWelfare Postdoc at @newcastleuni.bsky.social
Opinions are my own (she/her)
Veteran film/TV/media maker; NYT bestselling author; storyteller; dog mom. Direct descendant John Alden & Priscilla Mullins 1620. My American ancestors have been defending democracy since 1776. 🇺🇸 🇮🇪 & #Ally 🇺🇦 🇨🇦 🏳️‍🌈 🏳️‍⚧️
Professor @ Boston U, History & Philosophy of Science, esp. Philosophy of Geosciences, Director Phi-Geo Research Group, Assoc @ Harvard U. Hist Sci, Radcliffe Fellow Alum. Settler, wife, & mother. 📸s my crappy phone.
Webpage: https://bokulich.org/
Committed to analyzing & improving the treatment of animals through the legal system, and to fostering discourse, facilitating scholarship, developing strategic solutions, and building innovative bridges between theory and practice.