The blurb from John Fowles was an anti-recommendation, which I happily ignored. The Magus is still the worst book I've ever read.
07.10.2025 13:02 · ❤ 1 · 🔁 0 · 💬 0 · 📌 0

@multiplicityct.bsky.social
PhD student in philosophy at the University of Staffordshire. Heidegger, analytic ethics (trust and mistrust), philosophy of tech/AI. Marylander. MA Staffs, MBA Duke. Wittgenstein and Cantor handshake numbers = 3 (via John Conway).
Just started this last night, and it's really entertaining so far...
07.10.2025 13:02 · ❤ 2 · 🔁 0 · 💬 1 · 📌 0

This looks interesting...
07.10.2025 12:13 · ❤ 5 · 🔁 0 · 💬 0 · 📌 0

There are three or four songs from "Life of a Showgirl" that I can't get out of my head since this weekend.
Lots of negativity from the critics, but I'll admit, one of my all-time favorite songs is Joe Diffie's "Pickup Man". Not every banger needs to be Nobel Prize-worthy writing.
I'm sure he has said more since, but I assume he's referring to the ideas in the New Yorker piece from last year.
07.10.2025 11:48 · ❤ 1 · 🔁 0 · 💬 0 · 📌 0

It turns out that talented practitioners of X are not necessarily talented philosophers of X, for any given X.
07.10.2025 03:43 · ❤ 11 · 🔁 0 · 💬 1 · 📌 0

Those are narrow conditions, which lots of human persons don't satisfy. But yes, I think there has to be a setting-aside of traditional philosophical intuitions to make sense of these things. They're not "mere" objects even though they are also not persons. Not sure they're even on that continuum.
05.10.2025 17:12 · ❤ 0 · 🔁 0 · 💬 1 · 📌 0

That is an interesting point. Though they do "command" incredible (compute) resources & will begin to "control" physical objects. Interesting analysis to be done there along the lines of "possession is nine-tenths of the law."
05.10.2025 16:21 · ❤ 1 · 🔁 0 · 💬 1 · 📌 0

Found a really old paper that argues that chatbots are props that people use to engage in games of make-believe. This seems like a plausible analysis: journals.publishing.umich.edu/ergo/article...
05.10.2025 12:47 · ❤ 13 · 🔁 2 · 💬 1 · 📌 0

Yes. There's an analogous problem in the phil of trust literature, which tries to distinguish trust from reliance via Strawson's participant stance (or epistemic warrants).
I think we take the participant stance towards lots of non-agents, contrary to philosophers' intuition. Hence Eliza & bot romance.
Agreed.
And have no fear about the better toolkits, that's my dissertation topic so you only have to wait ~5 more years to get one!
That is Joanna Bryson's stance, too, or was as of "Robots Should Be Slaves" a number of years ago.
I'm a (descriptive) free-marketer, I think folks are mostly going to build what excites them. So I think we need better toolkits for how these things actually show up phenomenologically in our worlds.
Interesting thread. I think the personhood question is important, even if the answer remains "robots are not persons" for a long time. It's because chatbots are so tempting to see *as* persons. We might reject personhood if we think about it for a minute, but we'll often not think about it & act.
05.10.2025 15:25 · ❤ 12 · 🔁 2 · 💬 3 · 📌 0

Feels like this tracks the mood in my extremely Swiftie house. The LP arrives here tomorrow & we're streaming it right now.
03.10.2025 23:58 · ❤ 1 · 🔁 0 · 💬 0 · 📌 0

"Undoubtedly some important part of morality does depend in part on a system of threats and bribes, at least for its survival in difficult conditions when normal goodwill and normally virtuous dispositions may be insufficient to motivate the conduct required for the preservation and justice of the moral network of relationships. But equally undoubtedly life will be nasty, emotionally poor, and worse than brutish (even if longer), if that is all morality is, or even if that coercive structure of morality is regarded as the backbone, rather than as an available crutch, should the main support fail. For the main support has to come from those we entrust with the job of rearing and training persons so that they can be trusted in various ways, some trusted with extraordinary coercive powers, some with public decision-making powers, all trusted as parties to promise, most trusted by some who love them and by one or more willing to become co-parents with them, most trusted by dependent children, dependent elderly relatives, sick friends, and so on. A very complex network of a great variety of sorts of trust structures our moral relationships with our fellows, and if there is a *main* support to this network it is the trust we place in those who respond to the trust of new members of the moral community, namely to children, and prepare them for new forms of trust." - Annette Baier, "What Do Women Want in a Moral Theory?" 1985
Morality must be about more than obligation, contract and incentives. Our moral theories need trust and love. Parents and carers are the key locus for positive morality -- and where things go wrong in coercive moral theories.
There is so much going on in this paragraph by Annette Baier! #philsky
Not a direct answer to your question, but I find @doctorspurt.bsky.social's work on oppressive affective technologies really helpful for thinking about this. His paper in Topoi on this is sharp, and he has a great line about cigarettes being potentially the most successful affective tech in history.
02.10.2025 20:45 · ❤ 4 · 🔁 0 · 💬 2 · 📌 0

What's your view? I've been a longstanding Claude-preferrer but use both. Now I'm gravitating towards GPT.
02.10.2025 00:41 · ❤ 1 · 🔁 0 · 💬 0 · 📌 0

Jane Goodall insisted that giving names to the chimps and realizing that they had personalities was critical for doing science. It was a major methodological debate and she was entirely correct (and won). A #philsci legend in addition to everything else.
01.10.2025 18:26 · ❤ 42 · 🔁 3 · 💬 0 · 📌 0

We have nothing to worry about from AI's hyper-persuasiveness...as long as we're truly masters of our own beliefs, the claims of social epistemology are wrong, and we can fully understand how & why models are influencing us. In other words, it's an issue. Great new @rbnmckenna.bsky.social article.
30.09.2025 13:39 · ❤ 11 · 🔁 3 · 💬 0 · 📌 0

I read this while writing my PhD proposal last year; it's really good!
30.09.2025 12:07 · ❤ 1 · 🔁 0 · 💬 1 · 📌 0

The most recent one by Lucy O'Brien on Duddington and the problem of other minds is great so far. That was my commute listen last night. :-)
30.09.2025 11:48 · ❤ 2 · 🔁 0 · 💬 0 · 📌 0

ChatGPT with Pulse, by comparison, seems really light on its feet. I wonder if we're seeing a split between a prosumer/coding-focused Claude and a more consumer-oriented ChatGPT. I'm 100% talking about the websites/apps here rather than the API.
30.09.2025 02:42 · ❤ 2 · 🔁 0 · 💬 0 · 📌 0

You're absolutely right - I should have just searched for that information instead of deflecting. Let me do that now. You're totally right to call that out. Claude Sonnet 4.5 has a 200K token context window Claude Sonnet 4.5 \ Anthropic (about 500 pages of text). Compared to previous versions: Claude Sonnet 4 also had a 200K context window for paid users, with Enterprise users getting 500K How large is the context window on paid Claude plans? | Anthropic Help Center. However, Claude Sonnet 4 now supports up to 1 million tokens via Claude Sonnet 4 now supports 1M tokens of context \ Anthropic the API in public beta for higher-tier customers. The bigger news for your use case though: Claude Sonnet 4.5 has enhanced context management that helps it handle long-running tasks without hitting context limits - it can automatically remove stale tool results and has better context awareness throughout conversations Managing context on the Claude Developer Platform \ Anthropic. And yeah, I hear you on the "supple" critique. I should have just searched instead of hedging with knowledge cutoff disclaimers. That kind of friction isn't helpful when you're trying to get a quick answer.
My first experience with Claude Sonnet 4.5 is...not great. It referred to its knowledge cutoff (January 2025) instead of searching the web for an answer (about itself!) and gave some really canned answers to my request. Feels like a big step backward.
30.09.2025 02:41 · ❤ 3 · 🔁 0 · 💬 1 · 📌 0

My first thought was that this would dovetail nicely (meaning horribly) with the epistemic injustice in healthcare lit. Then I saw you cited Kidd & Carel and read your doctor example, so.
29.09.2025 23:43 · ❤ 3 · 🔁 0 · 💬 0 · 📌 0

Been there. But as you say, that's already a problem with spreadsheets. I'm not imagining a free-for-all. Instead, I'd love to have leaders chat with their budgets instead of squeezing into my over-engineered spreadsheets. Then pull it together into something pretty/useful without zillions of formulas.
29.09.2025 19:44 · ❤ 0 · 🔁 0 · 💬 0 · 📌 0

Right. And presumably, you could have a template sitting somewhere for the agent to grab and fill in.
Tim, I think the next layer of "weirdos" (vs normies) in the mix is data engineers. Ordinary citizens don't know how to structure their data. Can't do away with them yet.
I'd love to play with building a budgeting system for an imaginary company to see how much complexity this could handle. Especially if it could talk to structured data to ensure consistency across the interface -- it's really the interface and meaning that make this an impossible task IMHO.
29.09.2025 18:47 · ❤ 1 · 🔁 0 · 💬 2 · 📌 0

This would be useful for my paradigm case of "software that's impossible to write," which is budgeting and financial analytics/reporting. Every company has its own complex operational reality & crosswalk to the financials. You either squeeze into someone else's vision or build complex spreadsheets.
29.09.2025 18:46 · ❤ 1 · 🔁 0 · 💬 1 · 📌 0

Bar graph from the HBR article on AI workslop. "How did receiving this work change your perception of your colleague?" The answers are "I saw them as: less / the same / more" across five domains. The results (less / same / more):
Creative: 54% / 36% / 10%
Capable: 50% / 37% / 12%
Reliable: 49% / 37% / 14%
Trustworthy: 42% / 46% / 11%
Intelligent: 37% / 53% / 10%
Glass-half-empty/glass-half-full graph: close to half the time, genAI work products maintain or improve reputation. Presumably drafts/edits/etc. are useful. The other half of the time, they're eroding trust and reputation, i.e., slop.
The sky is not falling, but we can't use these tools on autopilot, either.