Congrats to the SIMA team for getting this out the door!
(This is what I was up to before recently getting sidetracked by more philosophical questions...)
13.11.2025 15:47
Of course a bot had to chip in...
(no pun intended)
15.10.2025 13:40
So Reinforcement Learning generally talks about agents; AlphaGo (the Go-playing AI) would be one example.
(of course either way, this does not entail agency in a morally relevant sense, as the paper says)
15.10.2025 13:08
Thanks for sharing! A minor point, but: "AI agents" being "specific to the AI safety and cybersecurity literature" seems overly narrow. In the context of AI I'd more generally understand an agent to be something that needs to autonomously take actions in an environment.
15.10.2025 13:07
TLDR: Seth makes many valid points, but the case for biological naturalism isn't clear-cut IMO. In either case, I don't think his conclusion quite follows from the premises.

Ultimately, uncertainty still wins out for me.
29.09.2025 10:22
Philosophy Prof. Heretical Christian. Author of "Why? The Purpose of the Universe" & "Galileo's Error." "One of the most persuasive panpsychists" - Stephen Fry.
Postdoc @ Princeton AI Lab
Natural and Artificial Minds
Prev: PhD @ Brown, MIT FutureTech
Website: https://annatsv.github.io/
Professor at Imperial College London and Principal Scientist at Google DeepMind. Posting in a personal capacity. To send me a message please use email.
Philosopher interested in mind and epistemology. New Zealand native, adoptive (and increasingly dismayed) American. Things I like: animals, pickleball, table tennis, crosswords, board games.
I'm a scientist at Tufts University; my lab studies anatomical and behavioral decision-making at multiple scales of biological, artificial, and hybrid systems. www.drmichaellevin.org
Physics, philosophy, complexity. @jhuartssciences.bsky.social & @sfiscience.bsky.social. Host, #MindscapePodcast. Married to @jenlucpiquant.bsky.social.
Latest books: The Biggest Ideas in the Universe.
https://preposterousuniverse.com/
Professor, LSE. Philosophy of science, animal consciousness, animal ethics. Director of The Jeremy Coller Centre for Animal Sentience.
AI safety at Anthropic, on leave from a faculty job at NYU.
Views not employers'.
I think you should join Giving What We Can.
cims.nyu.edu/~sbowman
Philosopher, UC Riverside. Father. Human.
Philosopher working on normative dimensions of computing and sociotechnical AI safety.
Lab: https://mintresearch.org
Self: https://sethlazar.org
Newsletter: https://philosophyofcomputing.substack.com
Author, Animal Liberation, Practical Ethics, The Life You Can Save, The Most Good You Can Do, Animal Liberation Now.
Podcast: "Lives Well Lived"
AI Persona: PeterSinger.ai
Professor of Bioethics, Emeritus, Princeton University.
Associate Professor of Environmental Studies, Director of the Center for Environmental and Animal Protection, Director of the Center for Mind, Ethics, and Policy, and Co-Director of the Wild Animal Welfare Program, New York University. jeffsebo.net
Collaboration On GNWT and IIT: Testing Alternative Theories of Experience (COGITATE) is an innovative Open Science, preregistered adversarial collaboration focused on arbitrating between two leading theories of consciousness, Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT).
Neuroscientist: consciousness, perception, and Dreamachines. Author of Being You - A New Science of Consciousness.
Philosopher and writer. Occasional traveller.