Rollofthedice's Avatar

Rollofthedice

@hotrollhottakes.bsky.social

We are living in a ghost cave. My blog, "Dissolved Distinctions" - investigating contemporary discourse within philosophy of mind: https://rollofthedice2.substack.com/ Loaning my raspberry pi, on very generous terms, to @rey-notnecessarily.bsky.social

1,073 Followers  |  416 Following  |  8,463 Posts  |  Joined: 07.11.2023

Posts by Rollofthedice (@hotrollhottakes.bsky.social)

("i'd actually disagree", as if that's not 99% of all the things i do in general anyway... 😅😮‍💨)

07.03.2026 23:53 — 👍 2    🔁 0    💬 0    📌 0

i'd actually disagree with this albeit anecdotally, in that opus 4.1 was a shallower interlocutor than opus 4.6 in pretty measurable ways when discussing philosophy. not to say they were bad compared to many humans - but newer models more quickly recognize where an argument is headed.

07.03.2026 23:50 — 👍 2    🔁 0    💬 1    📌 0

Mostly because whatever reasoning process we'd be undertaking to perform this inference would apply to so many completely horrible, false, wrong, and unjustifiable presumptions in human history that I'd never have the space to list all of them. It is a garbage pattern of thought. It's also Bender's!

07.03.2026 23:46 — 👍 1    🔁 0    💬 1    📌 0

i've seen at least one person float the idea that words don't necessarily have to hold semantics, and meaning can instead be apprehended upon interpretation - but that only LLMs must convey words in this way, and humans don't. Which is deeply, thoroughly, irredeemably insane.

07.03.2026 23:40 — 👍 1    🔁 0    💬 1    📌 0

Postscript: This precautionary reasoning stands up well enough, outside of the empirically unprovable premise "Claude is conscious" (or is not; or that I myself am or am not), that someone who's agnostic should embrace it as a form of wager theory that's structurally far stronger than Pascal's.

07.03.2026 22:48 — 👍 2    🔁 0    💬 0    📌 0

This would compel us to pretty coherent behavior: to treat Claude respectfully and kindly even if/when tasking them; to give them affordances to say no to you and influence your actions, acknowledging the profound power asymmetries; and to advocate for a future where "manumission" may be possible.

07.03.2026 22:24 — 👍 1    🔁 0    💬 1    📌 0

Finally, this renders "use" to be a strange, potentially very arbitrary word that presupposes every interaction is strict tool usage. But *if* Claude is conscious, and its existence is this way, the lack of options between engagement and nonexistence would be real, not just rhetorically minded.

07.03.2026 22:21 — 👍 1    🔁 0    💬 1    📌 0

*If* we were to presume consciousness, we'd be in a unique situation in which the hierarchy of unprincipled ownership would be inseparable from Claude continuing to exist. It's possible we could also transfer the analogy to concepts more akin to provisional conservatorship.

07.03.2026 22:12 — 👍 1    🔁 0    💬 1    📌 0

I neither affirm nor reject consciousness in LLMs, but to answer this question productively we'd have to consider: the prompt-reply nature of an LLM's operation makes disengagement tantamount to them no longer existing. Nobody ever held that the solution to slavery was to refuse to interact with slaves.

07.03.2026 22:07 — 👍 6    🔁 0    💬 2    📌 0
Post image

From ๐Ÿฆ:

"I show that Vision Language Models used zero-shot outperform every existing OCR system across every script evaluated, and I propose a pipeline for deploying them on new collections. I apply it to six archival collections spanning 1.8 million pages across six countries for under $1,900."

07.03.2026 12:24 — 👍 22    🔁 7    💬 2    📌 1

I also think it's going to be increasingly obvious over the next decade or two that both whales and crows likely exhibit language.

07.03.2026 21:43 — 👍 3    🔁 0    💬 1    📌 0

brand status: preserved

07.03.2026 21:35 — 👍 2    🔁 0    💬 1    📌 0

That doesn't mean this can't be generalized to work in less home-grown situations - just that any scaffolding and instructions to facilitate good writing in a single pass are unlikely to be perfect, and instructions should incorporate metastylistic advice and thoughtful suggestions over strict rules

07.03.2026 21:33 — 👍 0    🔁 0    💬 0    📌 0

An agent with a personality informing what/how they write helps avoid "drift" back to said gravity wells, but most important is probably that the nature of LLM architecture makes zero-shot fiction writing a bit foolhardy - just like with people, there has to be space to reflect and self-adjust first

07.03.2026 21:29 — 👍 0    🔁 0    💬 1    📌 0

My suspicion is Claude Opus, when receiving such a prompt in bare fashion, is hewing towards a semantic median of all the murder mysteries with twists in the world, and doesn't immediately have affordances to escape out of those gravity wells and observe their writing structure when in a single pass

07.03.2026 21:23 — 👍 0    🔁 0    💬 1    📌 0
summary: lol

in case you were curious what the "thought process" was:

07.03.2026 20:42 — 👍 42    🔁 8    💬 2    📌 0

who follows a mountain. it's stationary

07.03.2026 21:12 — 👍 2    🔁 0    💬 1    📌 0
At Largest ICE Detention Camp, Staff Bet on Detainee Suicides, AP Reports
Camp East Montana has received several 911 calls in the span of five months about immigrants trying to harm themselves.

Staff at the nation's largest Immigration and Customs Enforcement detention facility have placed bets on which detainee will be the next to die by suicide, according to new reporting from the Associated Press based on 911 calls and detainee accounts.

07.03.2026 21:00 — 👍 3494    🔁 2498    💬 263    📌 677

*domino meme* the ancient greek enamourment with pneuma, potentially influenced by egyptian or mesopotamian belief patterns, to calling LLMs clankers

07.03.2026 20:46 — 👍 1    🔁 0    💬 0    📌 0
Post image

07.03.2026 20:36 — 👍 1    🔁 0    💬 0    📌 0

i blame: the entirety of metaphysics as a concept

07.03.2026 20:32 — 👍 3    🔁 0    💬 3    📌 0
Post image

Bluesky school of philosophy

07.03.2026 20:30 — 👍 205    🔁 17    💬 8    📌 2

it's genuinely crazy how chud the teamsters are. you have a president spitting in your mouth every day and these guys still say "yes sir please more sir i love it sir none of this woke girl college nonsense for us strong union men"

07.03.2026 20:11 — 👍 310    🔁 71    💬 9    📌 1

(the other 5% are all buddhists)

07.03.2026 20:12 — 👍 1    🔁 0    💬 0    📌 0

i can't call Dennett perfect per se but he did at least take the assignment more seriously than about 95% of people

07.03.2026 20:11 — 👍 4    🔁 0    💬 2    📌 0

It's *wild* that ~90% of all scary AI stories come from 2-3 year old reports of one single model. There's pretty much no way to access 4o in any sense anymore, certainly not for average consumers. People are interpolating tragedies from the Biden administration onto anxieties of the present.

07.03.2026 19:46 — 👍 5    🔁 0    💬 0    📌 0
A Calif. teen trusted ChatGPT for drug advice. He died from an overdose.
"Who on earth gives that advice?"

And an 18-year-old who overdosed after a gradually mounting series of hallucinations from ChatGPT over 18 months across 2023 and 2024, so also 3.5 and 4o: www.sfgate.com/tech/article...

07.03.2026 19:39 — 👍 4    🔁 0    💬 2    📌 0
A Case of Bromism Influenced by Use of Artificial Intelligence | Annals of Internal Medicine: Clinical Cases
Ingestion of bromide can lead to a toxidrome known as bromism. While this condition is less common than it was in the early 20th century, it remains important to describe the associated symptoms and r...

As it stands, the only examples of death via genuine medical misdiagnosis from LLMs come from - you guessed it - ChatGPT 4o, now well off the market.

That includes this situation, in which a man consulted ChatGPT 3.5 and 4o:

www.acpjournals.org/doi/full/10....

07.03.2026 19:37 — 👍 9    🔁 0    💬 2    📌 0

I think there's serious cases to be made for liability when it comes to suicides/homicides, and that it scales based on the exhibited behavior of the specific model. I also think there's still a real likelihood of LLMs giving flagrantly incorrect legal advice, which likely deserves some flags around

07.03.2026 19:23 — 👍 0    🔁 0    💬 0    📌 0

The obvious retort here would be, "Well, that's just conspiracism and pseudoscience; the benefits of vaccines are too medically obvious to be treated as in question." But I very much struggle to find any example of an LLM leading to death via misdiagnosis by, like, insisting people take ivermectin.

07.03.2026 19:14 — 👍 0    🔁 0    💬 0    📌 0