@harveylederman.bsky.social
Professor of philosophy, UT Austin. Philosophical logic, formal epistemology, philosophy of language, Wang Yangming. www.harveylederman.com
Nice! Really appreciate this – will check out Bramble
19.12.2025 13:15
Thanks Nick! Curious how it went :)
19.12.2025 08:19
This essay is by far the best along its line, but the more I reflect on this stuff, the more I think it's hard to hold an image of human experience, human thought, human understanding, human life, and human relationships as meaningful ends while also seeing them as dead ends.
06.11.2025 01:45
Thanks for the kind words and thoughtful response, @peligrietzer.bsky.social! I'm not here as much, but I put some responses on the other site: x.com/LedermanHarv...
07.11.2025 17:16 β π 4 π 0 π¬ 0 π 0Tiwald also has a nice academic article on this important topic, if you want to go deeper!
philpapers.org/rec/TIWGIO
As a Wang Yangming partisan, I cheered at this quote:
30.10.2025 11:30
Enjoyed this nice piece by the great Justin Tiwald on autonomy and morality in Confucianism. Not sure I love the clickbait title, but I love the work Justin is doing uncovering views about moral deference and moral autonomy in (neo)Confucianism...
iai.tv/articles/the...
Essay here: scottaaronson.blog?p=9030
24.10.2025 16:01
Very excited to be going to Chicago for
@agnescallard.bsky.social's famous Night Owls next week! I'll be discussing my essay "ChatGPT and the Meaning of Life". Hope to see you there if you're local!
UT Austin Linguistics is hiring in computational linguistics!
Assistant or Associate.
We have a thriving group (sites.utexas.edu/compling/) and a long, proud history in the space. (Fun fact: Jeff Elman was a UT Austin Linguistics Ph.D.)
faculty.utexas.edu/career/170793
23.10.2025 00:35
Could be, but what about a not-top but decent student – very bad incentives there, I fear
22.10.2025 23:06
The fear is that we're under-accounting for the fact that the student who doesn't want to use AI is currently punished because their peers do better by taking shortcuts.
Yes, agree! You want to make the risk bad enough that it becomes less incentivized; current practice means that if you don't use it, you're being dumb... I want to change that.
22.10.2025 18:48
The professor I'm currently TAing for is making students use an extension called 'Process Feedback' that tracks keystroke logs and time spent on the document: processfeedback.org
22.10.2025 17:33
Yes
22.10.2025 17:14
Yes, I ask them to make me an editor on their doc
22.10.2025 16:42
Oh? I have been using the 'history' function there, but it doesn't track copy-paste
22.10.2025 16:37
If you're doing *any* out-of-class assessment, you're incentivizing AI use and harming students who do the work themselves. But some day we have to assess writing again. The solution is monitored computer labs. Which universities are building these? We need to push for them.
22.10.2025 16:06
Totally agree! It's such a confusing and hard area. I fear that feelings run so high about it that many are (reasonably) steering clear of discussing it for fear of error. But IMO we need to get clearer in our thinking, even if that involves stumbles along the way.
17.10.2025 16:43
Not actually relevant, but I don't eat meat (including fish), and I do delete AI chats all the time, so take that for what it's worth.
17.10.2025 16:38
Thanks, I appreciate this. I hoped it was clear that the analogy is about illustrating that many think uncertainty about what is a welfare subject can motivate action, not that "fish = AI". But ambiguity is in the eye of the reader, and I'm sorry to hear it isn't/wasn't clear.
17.10.2025 16:38
The analogy is clearly about risk! We say "It is *uncertain*". This uncertainty... it's clear that the point is about potential welfare subjects...
Your original post said we are "equating" them; I don't think that's a reasonable reading of this
This is not "equating" the moral status of the two as you originally said. It's an **analogy** about risk.
These are hard issues. I appreciate people have very strong feelings about them. But exactly for that reason it's important to be fair in issuing very strongly worded claims.
neutral on how welfare status and mentality are understood. That's a presentational issue, not a misunderstanding.
2. We wrote: "As an analogy, it is uncertain whether fish are welfare subjects. This uncertainty stops many people from eating fish, because they want to avoid the risk of moral harm."
1. This is very different from what you said in your original post. It is not misunderstanding "how AI works". I appreciate you would have done things differently than we did, but this is an unfair accusation. You would have liked functionalism to be a premise; we thought it was better to be...
17.10.2025 16:27
We definitely don't make this "equation". We give an example to illustrate why potential moral subjecthood can matter to what we should do. An illustrative example is not an equation.
Your point about repeating is interesting. I don't share that view, but I understand it.
1.) Can you expand on how we've misunderstood how LLMs work?
2.) We draw out this consequence at the end of the piece. We (the authors) have different views on whether to accept the premise. But we're closer to you than you think we are -- we think this whole topic quickly becomes wild.
Anthropic recently announced that Claude, its AI chatbot, can end conversations with users to protect "AI welfare." Simon Goldstein and @harveylederman.bsky.social argue that this policy commits a moral error by potentially giving AI the capacity to kill itself.
17.10.2025 15:43
Simon Goldstein and I have an op-ed live in Lawfare today! Anthropic's policy is premised on the idea that AI is a potential welfare subject. We argue that if you take that idea seriously (we don't take a stand on it here), the policy commits a moral mistake on its own terms.
17.10.2025 15:55