Leonard Dung

@leonarddung.bsky.social

Philosopher of cognition at the Ruhr-University Bochum. I work mainly on consciousness, AI, and animals. https://sites.google.com/view/leonard-dung/home

253 Followers  |  521 Following  |  2 Posts  |  Joined: 18.11.2024

Latest posts by leonarddung.bsky.social on Bluesky

LSE announces new centre to study animal sentience The Jeremy Coller Centre for Animal Sentience at LSE will develop new approaches to studying the feelings of other animals scientifically.

An emotional day - I can announce I'll be the first director of The Jeremy Coller Centre for Animal Sentience at the LSE, supported by a £4m grant from the Jeremy Coller Foundation. Our mission: to develop better policies, laws and ways of caring for animals. (1/2)
www.lse.ac.uk/News/Latest-...

25.03.2025 10:42 — 👍 696    🔁 131    💬 53    📌 11
Post-Doctoral Associate/Research Scientist, New York University - PhilJobs:JFP: An international database of jobs for philosophers

Two or more 2-year Postdoc / Research Scientist positions at NYU to work on issues tied to artificial consciousness. Strong research track record with expertise in AI expected. No teaching. Salary around $62K. Details and application materials are here philjobs.org/job/show/28878

16.03.2025 13:02 — 👍 8    🔁 2    💬 1    📌 0
Events Spring 2025

The NYU Wild Animal Welfare Program is thrilled to be hosting an online panel with Heather Browning and Oscar Horta on March 19 at 12pm ET! This event will settle once and for all the question of whether wild animal welfare is net positive or negative. RSVP below :)

sites.google.com/nyu.edu/wild...

04.03.2025 14:40 — 👍 22    🔁 6    💬 0    📌 2

Should we care more about shrimp?

www.slowboring.com/p/mailbag-mo...

28.02.2025 12:48 — 👍 75    🔁 6    💬 11    📌 1

In philosophy at least, this is uncommon but the editor would be fully within their rights. The reviewers only make recommendations; the editors are supposed to use their independent judgement when appropriate.

20.02.2025 13:33 — 👍 1    🔁 0    💬 0    📌 0
François Kammerer, Defining consciousness and denying its existence. Sailing between Charybdis and Scylla - PhilPapers Ulysses, the strong illusionist, sails towards the Strait of Definitions. On his left, Charybdis defines “phenomenal consciousness” in a loaded manner, which makes it a problematic entity from a physi...

I have a paper out in Philosophical Studies. It addresses a common (and old) objection to illusionism about phenomenal consciousness philpapers.org/rec/KAMDCA-2 1/x

18.02.2025 13:56 — 👍 47    🔁 13    💬 7    📌 1
LLMs and World Models, Part 1 How do Large Language Models Make Sense of Their “Worlds”?

Do large language models develop "emergent" models of the world? My latest Substack posts explore this claim and more generally the nature of "world models":

LLMs and World Models, Part 1: aiguide.substack.com/p/llms-and-w...

LLMs and World Models, Part 2: aiguide.substack.com/p/llms-and-w...

13.02.2025 22:30 — 👍 213    🔁 59    💬 14    📌 10

I can’t believe it - after years of advocacy, exclusionary zoning has ended in Cambridge.

We just passed the single most comprehensive rezoning in the US, legalizing multifamily housing up to 6 stories citywide, Paris-style.

Here are the details 🧵

11.02.2025 01:46 — 👍 2470    🔁 497    💬 66    📌 236

This is one reason why I think manuscripts should contain a robustness checks section. This would make it normal for researchers to conduct additional analyses, and for reviewers to request additional analyses, that ask: if key analyses are done this other reasonable way, are the results different?

07.02.2025 14:24 — 👍 13    🔁 4    💬 1    📌 0
Evaluating Artificial Consciousness 2025 - Sciencesconf.org

Call for abstracts: Workshop “Evaluating Artificial Consciousness”:
eac-2025.sciencesconf.org
10-11 June 2025 at RUB Bochum
#PhilMind #consciousness #consci #sentience #Ethics #CogSci

17.01.2025 14:18 — 👍 17    🔁 8    💬 1    📌 2

Cherish every day this thing isn't spreading from human to human.

27.12.2024 16:48 — 👍 10    🔁 2    💬 0    📌 0
Did OpenAI Just Solve Abstract Reasoning? OpenAI’s o3 model aces the "Abstraction and Reasoning Corpus" — but what does it mean?

Some of my thoughts on OpenAI's o3 and the ARC-AGI benchmark

aiguide.substack.com/p/did-openai...

23.12.2024 14:38 — 👍 342    🔁 99    💬 17    📌 27
Title card: Alignment Faking in Large Language Models by Greenblatt et al.

New work from my team at Anthropic in collaboration with Redwood Research. I think this is plausibly the most important AGI safety result of the year. Cross-posting the thread below:

18.12.2024 17:46 — 👍 126    🔁 29    💬 7    📌 11

New in print: "Let's Hope We're Not Living in a Simulation":

In Reality+, David Chalmers suggests that it wouldn't be too bad if we lived in a computer simulation. I argue on the contrary that if we live in a simulation, we ought to attach a significant conditional credence to 1/3

17.12.2024 19:21 — 👍 36    🔁 7    💬 1    📌 0
Towards ending the animal cognition war: a three-dimensional model of causal cognition - Biology & Philosophy Debates in animal cognition are frequently polarized between the romantic view that some species have human-like causal understanding and the killjoy view that human causal reasoning is unique. These ...

Nice list. These two come to mind: Causal cognition link.springer.com/article/10.1... and behavioral innovation www.cambridge.org/core/journal...

23.11.2024 09:31 — 👍 1    🔁 0    💬 1    📌 0