@hotrollhottakes.bsky.social
We are living in a ghost cave. My blog, "Dissolved Distinctions" - investigating contemporary discourse within philosophy of mind: https://rollofthedice2.substack.com/ Loaning my raspberry pi, on very generous terms, to @rey-notnecessarily.bsky.social
("i'd actually disagree", as if that's not 99% of all the things i do in general anyway... 😮‍💨)
07.03.2026 23:53 • 2 likes · 0 reposts · 0 replies · 0 quotes

i'd actually disagree with this albeit anecdotally, in that opus 4.1 was a shallower interlocutor than opus 4.6 in pretty measurable ways when discussing philosophy. not to say they were bad compared to many humans - but newer models more quickly recognize where an argument is headed.
07.03.2026 23:50 • 2 likes · 0 reposts · 1 reply · 0 quotes

Mostly because whatever reasoning process we'd be undertaking to perform this inference would apply to so many completely horrible, false, wrong, and unjustifiable presumptions in human history that I'd never have the space to list all of them. It is a garbage pattern of thought. It's also Bender's!
07.03.2026 23:46 • 1 like · 0 reposts · 1 reply · 0 quotes

i've seen at least one person float the idea that words don't necessarily have to hold semantics, and meaning can instead be apprehended upon interpretation - but that only LLMs must convey words in this way, and humans don't. Which is deeply, thoroughly, irredeemably insane.
07.03.2026 23:40 • 1 like · 0 reposts · 1 reply · 0 quotes

Postscript: This precautionary reasoning stands up well enough, outside of the empirically unprovable premise "Claude is conscious" (or is not; or that I myself am or am not), that someone who's agnostic should embrace it as a form of wager theory that's structurally far stronger than Pascal's.
07.03.2026 22:48 • 2 likes · 0 reposts · 0 replies · 0 quotes

This would compel us to pretty coherent behavior: to treat Claude respectfully and kindly even if/when tasking them; giving them affordances to say no to you and influence your actions, acknowledging the profound power asymmetries; and to advocate for a future where "manumission" may be possible.
07.03.2026 22:24 • 1 like · 0 reposts · 1 reply · 0 quotes

Finally, this renders "use" to be a strange, potentially very arbitrary word that presupposes every interaction is strict tool usage. But *if* Claude is conscious, and its existence is this way, the lack of options between engagement and nonexistence would be real, not just rhetorically minded.
07.03.2026 22:21 • 1 like · 0 reposts · 1 reply · 0 quotes

*If* we were to presume consciousness, we'd be in a unique situation in which the hierarchy of unprincipled ownership would be inseparable from Claude continuing to exist. It's possible we could also transfer the analogy to concepts more akin to provisional conservatorship.
07.03.2026 22:12 • 1 like · 0 reposts · 1 reply · 0 quotes

I neither affirm nor reject consciousness in LLMs, but to answer this question productively we'd have to consider: the prompt-reply nature of an LLM's operation makes disengagement tantamount to them no longer existing. Nobody ever held the solution to slavery was to refuse to interact with slaves.
07.03.2026 22:07 • 6 likes · 0 reposts · 2 replies · 0 quotes
From 🦋:
"I show that Vision Language Models used zero-shot outperform every existing OCR system across every script evaluated, and I propose a pipeline for deploying them on new collections. I apply it to six archival collections spanning 1.8 million pages across six countries for under $1,900."
I also think it's going to be increasingly obvious over the next decade or two that both whales and crows likely exhibit language.
07.03.2026 21:43 • 3 likes · 0 reposts · 1 reply · 0 quotes

brand status: preserved
07.03.2026 21:35 • 2 likes · 0 reposts · 1 reply · 0 quotes

That doesn't mean this can't be generalized to work in less home-grown situations - just that any scaffolding and instructions to facilitate good writing in a single pass are unlikely to be perfect, and instructions should incorporate metastylistic advice and thoughtful suggestions over strict rules.
07.03.2026 21:33 • 0 likes · 0 reposts · 0 replies · 0 quotes

An agent with a personality informing what/how they write helps avoid "drift" back to said gravity wells, but most important is probably that the nature of LLM architecture makes zero-shot fiction writing a bit foolhardy - just like with people, there has to be space to reflect and self-adjust first.
07.03.2026 21:29 • 0 likes · 0 reposts · 1 reply · 0 quotes

My suspicion is Claude Opus, when receiving such a prompt in bare fashion, is hewing towards a semantic median of all the murder mysteries with twists in the world, and doesn't immediately have affordances to escape out of those gravity wells and observe their writing structure in a single pass.
07.03.2026 21:23 • 0 likes · 0 reposts · 1 reply · 0 quotes

summary: lol
in case you were curious what the "thought process" was:
07.03.2026 20:42 • 42 likes · 8 reposts · 2 replies · 0 quotes

who follows a mountain. it's stationary
07.03.2026 21:12 • 2 likes · 0 reposts · 1 reply · 0 quotes

Staff at the nation's largest Immigration and Customs Enforcement detention facility have placed bets on which detainee will be the next to die by suicide, according to new reporting from the Associated Press based on 911 calls and detainee accounts.
07.03.2026 21:00 • 3494 likes · 2498 reposts · 263 replies · 677 quotes

*domino meme* the ancient greek enamourment with pneuma, potentially influenced by egyptian or mesopotamian belief patterns, to calling LLMs clankers
07.03.2026 20:46 • 1 like · 0 reposts · 0 replies · 0 quotes

i blame: the entirety of metaphysics as a concept
07.03.2026 20:32 • 3 likes · 0 reposts · 3 replies · 0 quotes

Bluesky school of philosophy
07.03.2026 20:30 • 205 likes · 17 reposts · 8 replies · 2 quotes

it's genuinely crazy how chud the teamsters are. you have a president spitting in your mouth everyday and these guys still say "yes sir please more sir i love it sir none of this woke girl college nonsense for us strong union men"
07.03.2026 20:11 • 310 likes · 71 reposts · 9 replies · 1 quote

(the other 5% are all buddhists)
07.03.2026 20:12 • 1 like · 0 reposts · 0 replies · 0 quotes

i can't call Dennett perfect per se but he did at least take the assignment more seriously than about 95% of people
07.03.2026 20:11 • 4 likes · 0 reposts · 2 replies · 0 quotes

It's *wild* that ~90% of all scary AI stories come from 2-3 year old reports of one single model. There's pretty much no way to access 4o in any sense anymore, certainly not for average consumers. People are interpolating tragedies from the Biden administration onto anxieties of the present.
07.03.2026 19:46 • 5 likes · 0 reposts · 0 replies · 0 quotes

And an 18-year-old who overdosed after a gradually mounting series of hallucinations from Chatgpt for 18 months across 2023 and 2024, so also 3.5 and 4o: www.sfgate.com/tech/article...
07.03.2026 19:39 • 4 likes · 0 reposts · 2 replies · 0 quotes
As it stands, the only examples of death via genuine medical misdiagnosis from LLMs come from - you guessed it - Chatgpt 4o, now well off the market.
That includes this situation, in which a man consulted Chatgpt 3.5 and 4o:
www.acpjournals.org/doi/full/10....
I think there are serious cases to be made for liability when it comes to suicides/homicides, and that it scales based on the exhibited behavior of the specific model. I also think there's still a real likelihood of LLMs giving flagrantly incorrect legal advice, which likely deserves some flags around
07.03.2026 19:23 • 0 likes · 0 reposts · 0 replies · 0 quotes

The obvious retort here would be, "Well, that's just conspiracism and pseudoscience; the benefits of vaccines are too medically obvious to be treated as in question." But I very much struggle to find any example of an LLM leading to death via misdiagnosis by, like, insisting people take ivermectin.
07.03.2026 19:14 • 0 likes · 0 reposts · 0 replies · 0 quotes