
Harvey Lederman

@harveylederman.bsky.social

Professor of philosophy UTAustin. Philosophical logic, formal epistemology, philosophy of language, Wang Yangming. www.harveylederman.com

1,759 Followers  |  399 Following  |  257 Posts  |  Joined: 24.07.2023

Latest posts by harveylederman.bsky.social on Bluesky

Nice! Really appreciate this – will check out Bramble

19.12.2025 13:15 · 👍 3    🔁 0    💬 0    📌 0

Thanks Nick! Curious how it went :)

19.12.2025 08:19 · 👍 2    🔁 0    💬 1    📌 0

This essay is by far the best along its line, but the more I reflect on this stuff, the more I think it's hard to hold an image of human experience, human thought, human understanding, human life, and human relationships as meaningful ends while also seeing them as dead-ends.

06.11.2025 01:45 · 👍 26    🔁 5    💬 5    📌 1

Thanks for the kind words and thoughtful response, @peligrietzer.bsky.social! I'm not here as much, but I put some responses on the other site: x.com/LedermanHarv...

07.11.2025 17:16 · 👍 4    🔁 0    💬 0    📌 0
Preview
Justin Tiwald, "Getting It Oneself" (_Zide_ 自得) as an Alternative to Testimonial Knowledge and Deference to Tradition - PhilPapers To morally defer is to form a moral belief on the basis of some credible authority's recommendation rather than on one's own moral judgment. Many philosophers have suggested that the sort ...

Tiwald also has a nice academic article on this important topic, if you want to go deeper!

philpapers.org/rec/TIWGIO

30.10.2025 11:30 · 👍 2    🔁 0    💬 0    📌 0
Post image

As a Wang Yangming partisan, I cheered at this quote:

30.10.2025 11:30 · 👍 1    🔁 0    💬 1    📌 0
Preview
The radical independent thinking in Chinese philosophy | Justin Tiwald

Enjoyed this nice piece by the great Justin Tiwald on autonomy and morality in Confucianism. Not sure I love the clickbait title, but I love the work Justin is doing uncovering views about moral deference and moral autonomy in (neo)Confucianism...

iai.tv/articles/the...

30.10.2025 11:30 · 👍 3    🔁 0    💬 1    📌 0
Preview
ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman Scott Aaronson's Brief Foreword: Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he's become one of my …

Essay here: scottaaronson.blog?p=9030

24.10.2025 16:01 · 👍 4    🔁 0    💬 0    📌 2
Post image

Very excited to be going to Chicago for @agnescallard.bsky.social's famous Night Owls next week! I'll be discussing my essay "ChatGPT and the Meaning of Life". Hope to see you there if you're local!

24.10.2025 16:01 · 👍 4    🔁 1    💬 1    📌 0
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language

UT Austin Linguistics is hiring in computational linguistics!

Asst. or Assoc.

We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🤘

07.10.2025 20:53 · 👍 41    🔁 27    💬 1    📌 4

😢

23.10.2025 00:35 · 👍 1    🔁 0    💬 1    📌 0

Could be, but what about a not-top but decent student? Very bad incentives there, I fear

22.10.2025 23:06 · 👍 2    🔁 0    💬 1    📌 0

the fear is that we're under-accounting for the fact that the student who doesn't want to use AI is currently punished because their peers do better by shortcuts

22.10.2025 19:00 · 👍 2    🔁 0    💬 1    📌 0

yes agree! you want to make the risk bad enough that it becomes less incentivized; current practice means that if you don't use it, you're being dumb...I want to change that

22.10.2025 18:48 · 👍 2    🔁 0    💬 1    📌 0
Preview
See how you write or use AI | Process Feedback Every Student's Work Has a Story | Process Feedback enables teachers and students to see the writing process and AI usage. It helps students reflect on their writing and the role of AI.

The professor I'm currently TAing for is making students use an extension called 'Process Feedback' that tracks keystroke logs and time on the document: processfeedback.org

22.10.2025 17:33 · 👍 2    🔁 1    💬 0    📌 0

Yes

22.10.2025 17:14 · 👍 1    🔁 0    💬 0    📌 0

Yes I ask them to make me editor on their doc

22.10.2025 16:42 · 👍 3    🔁 0    💬 1    📌 0

oh? I have been using the "history" function there but it doesn't track copy-paste

22.10.2025 16:37 · 👍 2    🔁 0    💬 3    📌 0

If you're doing *any* out-of-class assessment, you're incentivizing AI use and harming students who do the work themselves. But someday we have to assess writing again. The solution is monitored computer labs. Which universities are building these? We need to push for them.

22.10.2025 16:06 · 👍 9    🔁 3    💬 3    📌 0

Totally agree! it's such a confusing and hard area. I fear that feelings run so high about it that many are (reasonably) steering clear of discussing it for fear of error. But IMO we need to get clearer in our thinking, even if that involves stumbles along the way.

17.10.2025 16:43 · 👍 3    🔁 1    💬 0    📌 0

Not actually relevant, but I don't eat meat (including fish), and I do delete AI chats all the time, so take that for what it's worth.

17.10.2025 16:38 · 👍 1    🔁 0    💬 1    📌 0

Thanks, I appreciate this. I hoped it was clear that the analogy is about illustrating that many think that uncertainty about what is a welfare subject can motivate action, not that "fish = AI". But ambiguity is in the eye of the reader and I'm sorry to hear it isn't/wasn't clear.

17.10.2025 16:38 · 👍 1    🔁 0    💬 1    📌 0

The analogy is clearly about risk! We say "It is *uncertain*". This uncertainty...it's clear that the point is about potential welfare subjects...

Your original post said we are "equating" them; I don't think that's a reasonable reading of this

17.10.2025 16:29 · 👍 0    🔁 0    💬 1    📌 0

This is not "equating" the moral status of the two as you originally said. It's an **analogy** about risk.

These are hard issues. I appreciate people have very strong feelings about them. But exactly for that reason it's important to be fair in issuing very strongly worded claims.

17.10.2025 16:27 · 👍 0    🔁 0    💬 2    📌 0

neutral on how welfare status and mentality are understood. That's a presentational issue, not a misunderstanding.

2. We wrote: "As an analogy, it is uncertain whether fish are welfare subjects. This uncertainty stops many people from eating fish, because they want to avoid the risk of moral harm."

17.10.2025 16:27 · 👍 0    🔁 0    💬 1    📌 0

1. This is very different from what you said in your original post. It is not misunderstanding "how AI works". I appreciate you would have done things differently than we did, but this is an unfair accusation. You would have liked functionalism to be a premise; we thought it was better to be...

17.10.2025 16:27 · 👍 0    🔁 0    💬 1    📌 0

We definitely don't make this "equation". We give an example to illustrate why potential moral subject-hood can matter to what we should do. An illustrative example is not an equation.

Your point about repeating is interesting. I don't share that view, but I understand it.

17.10.2025 16:16 · 👍 1    🔁 0    💬 1    📌 0

1.) Can you expand on how we've misunderstood how LLMs work?

2.) We draw out this consequence at the end of the piece. We (the authors) have different views on whether to accept the premise. But we're closer to you than you think we are -- we think this whole topic quickly becomes wild.

17.10.2025 16:15 · 👍 0    🔁 0    💬 1    📌 0
Preview
Claude's Right to Die? The Moral Error in Anthropic's End-Chat Policy Anthropic has given its AI the right to end conversations when it is "distressed." But doing so could be akin to unintended suicide.

Anthropic recently announced that Claude, its AI chatbot, can end conversations with users to protect "AI welfare." Simon Goldstein and @harveylederman.bsky.social argue that this policy commits a moral error by potentially giving AI the capacity to kill itself.

17.10.2025 15:43 · 👍 12    🔁 7    💬 14    📌 4

Simon Goldstein and I have an op-ed live in Lawfare today! Anthropic's policy is premised on the idea that AI is a potential welfare subject. We argue that if you take that idea seriously (we don't take a stand on it here), the policy commits a moral mistake on its own terms.

17.10.2025 15:55 · 👍 7    🔁 1    💬 1    📌 0
