An emotional day - I can announce I'll be the first director of The Jeremy Coller Centre for Animal Sentience at the LSE, supported by a £4m grant from the Jeremy Coller Foundation. Our mission: to develop better policies, laws and ways of caring for animals. (1/2)
www.lse.ac.uk/News/Latest-...
Two or more 2-year Postdoc / Research Scientist positions at NYU to work on issues tied to artificial consciousness. Strong research track record with expertise in AI expected. No teaching. Salary around $62K. Details and application materials are here philjobs.org/job/show/28878
The NYU Wild Animal Welfare Program is thrilled to be hosting an online panel with Heather Browning and Oscar Horta on March 19 at 12pm ET! This event will settle once and for all the question of whether wild animal welfare is net positive or negative. RSVP below :)
sites.google.com/nyu.edu/wild...
Should we care more about shrimp?
www.slowboring.com/p/mailbag-mo...
In philosophy at least, this is uncommon but the editor would be fully within their rights. The reviewers only make recommendations; the editors are supposed to use their independent judgement when appropriate.
I have a paper out in Philosophical Studies. It addresses a common (and old) objection to illusionism about phenomenal consciousness: philpapers.org/rec/KAMDCA-2 1/x
Do large language models develop "emergent" models of the world? My latest Substack posts explore this claim and more generally the nature of "world models":
LLMs and World Models, Part 1: aiguide.substack.com/p/llms-and-w...
LLMs and World Models, Part 2: aiguide.substack.com/p/llms-and-w...
I can’t believe it - after years of advocacy, exclusionary zoning has ended in Cambridge.
We just passed the single most comprehensive rezoning in the US, legalizing Paris-style multifamily housing up to 6 stories citywide.
Here are the details 🧵
This is one reason why I think manuscripts should contain a robustness checks section. This would make it normal for researchers to conduct additional analyses, and for reviewers to request additional analyses, that ask: if key analyses are done this other reasonable way, are the results different?
Call for abstracts: Workshop “Evaluating Artificial Consciousness”:
eac-2025.sciencesconf.org
10-11 June 2025 at RUB Bochum
#PhilMind #consciousness #consci #sentience #Ethics #CogSci
Cherish every day this thing isn't spreading from human to human.
Some of my thoughts on OpenAI's o3 and the ARC-AGI benchmark
aiguide.substack.com/p/did-openai...
New work from my team at Anthropic in collaboration with Redwood Research. I think this is plausibly the most important AGI safety result of the year. Cross-posting the thread below:
New in print: "Let's Hope We're Not Living in a Simulation":
In Reality+, David Chalmers suggests that it wouldn't be too bad if we lived in a computer simulation. I argue on the contrary that if we live in a simulation, we ought to attach a significant conditional credence to 1/3
Nice list. These two come to mind: Causal cognition link.springer.com/article/10.1... and behavioral innovation www.cambridge.org/core/journal...