It's remarkable some people don't believe deception is harmful, even if the deception isn't discovered. Furthermore, deception harms the deceiver by making them the kind of person who deceives.
24.10.2025 09:29

@jbriscoe.bsky.social
Hospice and Palliative Care Physician #MedPsych #MedSky #HAPC #Bioethics Writing @ Notes from a Family Meeting: https://familymeetingnotes.substack.com
The PPP is grounded on the presumption that a patient with the capacity to make a decision is given information and yields a response. It treats the patient as a machine, which is why we believe a machine could so easily replace them. But that's not how decision-making works. It's laden with emotion and values. It's fraught with negotiation. Most people aren't sitting at home pondering their...
familymeetingnotes.substack.com/p/a-machine-... Some thoughts on using #AI for surrogate decision-making.
22.10.2025 09:30

I've never said yes to an interview so quickly as when @ashleybelanger.bsky.social reached out to discuss a topic that, it will surprise no one to learn, I feel pretty strongly about.
20.10.2025 15:52

Bioethicists have been debating the idea for a while, e.g., pubmed.ncbi.nlm.nih.gov/24556152/. Now others are trying to empirically validate the concept. I think it is a case where bioethics for some is really just figuring out how to use technology to support a single bioethical principle.
21.10.2025 09:12

In one sense, AI surrogate decision-makers are the right answer to the wrong question: familymeetingnotes.substack.com/p/a-machine-... #hapc #ai #medsky
20.10.2025 23:15

A response in narrative: familymeetingnotes.substack.com/p/nothing-be... #hapc #medsky #ai
20.10.2025 23:15

From @arstechnica.com: arstechnica.com/features/202... My answer, as a palliative care physician: never.
20.10.2025 23:15

this was such an informative read. @emilymoin.com is really good at breaking down the problems here
20.10.2025 16:02

Looking Again at SSRIs in Adolescent Depression and Anxiety
"Society gets the type of adolescent that it expects and deserves."
A response of sorts to @ploederl.bsky.social's recent blogpost
www.psychiatrymargins.com/p/looking-ag...
So maybe 15 years from now none of this will matter (though I'm skeptical) -- but in the meantime, I think I stand by what I told Dhruv in that story, if we (meaning educators) don't figure out how to train the next gen in this environment, we're all screwed.
17.10.2025 13:19

My biggest concern about this technology in the short (and, well, medium) term is what it does to US, especially the current generation of doctors in training. Even the difference between my third-year and first-year residents is pretty stark in their use of AI tools (especially Open Evidence).
17.10.2025 13:14

...medicine only has a modest influence over whether someone has a good death... familymeetingnotes.substack.com/p/whats-a-go... #medsky #hapc
17.10.2025 09:40

[Image: page 1 of the editorial "Machine Learning Cannot Replace Surrogate Decision-Makers in Resuscitation Decisions for Incapacitated Patients". Bottom: Read full article at ai.nejm.org.]
Editorial by Robert D. Truog, MD, MA, and R. Sean Morrison, MD: Machine Learning Cannot Replace Surrogate Decision-Makers in Resuscitation Decisions for Incapacitated Patients nejm.ai/3Vysymr
#AI #MedSky #MLSky
I don't discount the possibility that AI might be helpful in a number of different ways, but there's very rarely an unmitigated good in health care. There's almost always a trade-off, and we should consider that instead of writing advertisements for journals.
14.10.2025 09:51

I'm not wholly against using AI in some settings (e.g., I use OpenEvidence occasionally), but we need to count the cost of its use. We can learn from history to discern how AI might impact us socially, existentially, and, yes, philosophically. #medsky #ai
14.10.2025 09:27

These authors come close to highlighting lessons from our experience with EMRs. It's as if their hope in AI overwhelms their realistic appraisal of how the EMR and AI are alike in that they're both forms of technology which shape us. web.cs.ucdavis.edu/~rogaway/cla...
14.10.2025 09:24

If the relationship fundamentally changes, it's because one or both parties in the relationship are dehumanized by the ever-increasing mechanization of the clinical encounter. familymeetingnotes.substack.com/p/my-machine...
14.10.2025 09:24

4. "Expect the physician-patient relationship to evolve." This prediction is grounded on a wholly technical understanding of that relationship, in which the clinician is responsible for providing services to a customer. In reality, this relationship is about a person in need seeking help from another human.
14.10.2025 09:24

3. The fatalism is obvious. They draw the parallel w/ EMRs but don't draw out the implication: "Just as the electronic health record forced clinicians to spend more time at computers, GenAI will alter how clinicians work." What will #AI force us to do? familymeetingnotes.substack.com/p/where-the-...
14.10.2025 09:24

2. Overlook that clinicians must have certain qualities in order to use tools well. It's possible to over-jig health care. familymeetingnotes.substack.com/p/crafting-h...
14.10.2025 09:24

1. Fit the clinician to the tool (EMR - e.g., make them navigate labyrinthine menus; AI - prompt appropriately).
14.10.2025 09:24

It's remarkable to see the same mistakes we've made with the EMR be re-made with #AI, as suggested in this article from @jamainternalmed.com jamanetwork.com/journals/jam... #MedSky
14.10.2025 09:24

mbird.com/suffering/re... from Aaron McKethan at Mockingbird
11.10.2025 00:09

I reflect on Frankenstein more than the Krebs cycle. I wonder what that says about #meded. Or me. podcasts.apple.com/us/podcast/p...
10.10.2025 11:21

I have expertise in both the technical and ethical aspects of clinical AI, so one of my most frequent refrains is that we need to separately address questions of "can we?" and "should we?"
Anyway, absolutely not.
Our greatest concern as clinicians is that the data presented in this article will be interpreted by the media and other readers as evidence that ML can replace surrogates in end-of-life decision-making. "Machine Learning Cannot Replace Surrogate Decision-Makers in Resuscitation Decisions for Incapacitated Patients" by Robert D. Truog, M.D., M.A., and R. Sean Morrison, M.D.
Although ML has been shown to outperform surrogate decision-makers in predicting patient preferences for CPR, the authors of a new editorial argue that ML cannot replace surrogates in making decisions for incapacitated patients. nejm.ai/3Vysymr
#AI #MedSky #MLSky
The sad thing about this is that clinicians are so beleaguered by the burdens of the EMR, they'll take any port in a storm without fully appreciating the hidden costs. It looks very official to study things like data privacy while overlooking the surveillance risks. #ai #medsky
04.10.2025 10:46

The evidence I've seen suggests patients do not like their clinicians typing away at a computer screen while they attempt to share sensitive details about their health. Presumably ambient AI scribing might fix this, but I have my doubts, because clinicians have many reasons not to listen well.
03.10.2025 18:06

I'm amazed anyone could believe that once an AI scribe saves a clinician time, administrators will leave the reclaimed time untouched. If a clinician could see 14-16 patients in a day without an AI scribe (already a number too high for primary care and most sub-specialties), surely they could see 20-25 patients with an AI scribe. The encounters become shorter because what is expected isn't human connection but a technical service rendered ever more efficiently by other tools in the AI suite.

That reclaimed, unbillable time shouldn't be given back to those 14-16 patients because presumably those encounters are already optimized. It should be given to 6-9 other patients who can be seen today instead of next week. This isn't good for the clinicians, but the encounter isn't about them. They are parts in the machine, and may soon be expunged by AI anyway. Ironically, neither is it good for the patient, who finds themselves treated as a bureaucratic client, a machine, or an animal, depending on the flavor of dehumanization their condition warrants.
03.10.2025 09:24