I’m sorry but no it is not “time to select your course materials for Fall 2026!!!!”
03.03.2026 17:27 — 👍 88 🔁 3 💬 5 📌 1
@elisewang.bsky.social
medievalist. law & history (old) | conspiracy theories (old and new) | taiwan (new and yet to be) | chicago —> los angeles
carnegie fellow ‘24-‘26, writing a book on medieval conspiracy theories
elisedwang.com
Of course! I feel like if we're going to find a conversation between AI boosterism and doomerism, we have to look at the product itself, closely, which means quoting it at length. I'd be interested to see an analysis of an actual text of an article, not just surface impressions.
03.03.2026 22:11 — 👍 1 🔁 0 💬 0 📌 0
Yup! That's why it makes a good example.
03.03.2026 22:04 — 👍 3 🔁 0 💬 0 📌 0
Timing! Yesterday, I wrote a thread on working with Claude on writing, and today this came out. It’s a great example of the logical dissociation that Claude produces, and it's worth a closer look.
03.03.2026 21:16 — 👍 35 🔁 12 💬 4 📌 1
In my experience it is always tempted to fill in gaps. But I haven't used it on large-scale data analysis, because the data preparation is not doable by machine yet, and once I've done the research I might as well write the thing, if that makes sense.
03.03.2026 21:57 — 👍 1 🔁 0 💬 1 📌 0
If it's read its Mill, it's better than this one!
03.03.2026 21:56 — 👍 2 🔁 0 💬 0 📌 0
Last thing: if I, a medieval historian, knew where this idea was from (Han) and that the Mills were quoted wrongly, I find it hard to believe that a political theorist wouldn't. Peer review might be useful after all...
03.03.2026 21:32 — 👍 15 🔁 0 💬 1 📌 0
Anyway, I could go on, but I am not a social scientist and I should be doing my own writing. I genuinely don’t mean to pile on to anyone, but I think if there’s a call to talk about the claim that LLMs “write,” then we should actually dig into how writing works.
03.03.2026 21:29 — 👍 15 🔁 0 💬 1 📌 0
Let us now pass to the second division of the argument, and dismissing the supposition that any of the received opinions may be false, let us assume them to be true, and examine into the worth of the manner in which they are likely to be held, when their truth is not freely and openly canvassed. However unwillingly a person who has a strong opinion may admit the possibility that his opinion may be false, he ought to be moved by the consideration that however true it may be, if it is not fully, frequently, and fearlessly discussed, it will be held as a dead dogma, not a living truth.
The de Tocqueville quote is relevant (Han also cites him, so plagiarism might be flagged), but the Mill one is not about circumscription but about competition (see context). They (John and Harriet) are worried about a lack of discussion of strongly-held beliefs, not a limiting of what is possible.
03.03.2026 21:28 — 👍 5 🔁 0 💬 1 📌 0
Political theorists have spent the last decade asking whether big corporations are like mini-governments—whether your boss is a kind of dictator, whether tech companies should be democratized. That's the right instinct, but it's looking in the wrong place. The real power that Google, Meta, and OpenAI exercise isn't over your choices—it's over the conditions under which you think. They don't tell you what to believe; they shape what you encounter, what feels plausible, what questions seem worth asking, and increasingly, through generative AI, they produce the very material out of which your beliefs are formed. Tocqueville had a phrase for this: the "formidable circle drawn around thought." Mill feared the same thing: that when a society loses the friction of genuinely competing ideas, even its true beliefs decay into "dead dogma," held by rote and understood by no one. Both were responding to the communications revolutions of their own eras. Ours is more radical than anything they imagined, because for the first time in history, a handful of private companies control not just which ideas circulate, but the infrastructure of cognition itself—and they do so with no democratic mandate, minimal transparency, and almost no accountability. I call this epistemic domination, and I argue it's the single greatest untheorized threat to self-governance in the twenty-first century.
This is from a Claude pol theory article. The idea is ok, if almost an exact replica of Byung-Chul Han’s Psychopolitics (“the greatest untheorized threat” should be bodied in review, along with the claim that our era is more radical than any previous one). But the text does not follow logically.
03.03.2026 21:26 — 👍 9 🔁 1 💬 1 📌 0
(It was very difficult to find any actual AI product in any of these articles (which prefer to describe the results, rather than quote them), so I took the only one I could find, from Mounk)
03.03.2026 21:22 — 👍 8 🔁 0 💬 1 📌 0
I don’t want to harp on one article, so I’ll look at one of the examples that Kustov cites as evidence that AI can do social science research better than most professors.
03.03.2026 21:21 — 👍 6 🔁 0 💬 1 📌 0
Thinking logically, the backlash could perhaps show that “the field can’t discuss the obvious without circling the wagons”! But it certainly doesn’t prove his point, which was about which parts of papers are vestigial. Defensiveness has no bearing on the utility of peer reviews.
03.03.2026 21:20 — 👍 11 🔁 0 💬 1 📌 0
Twitter post by Sean Westwood: The academic paper is a dead format walking. AI does lit reviews better. AI will do (is doing) peer review. Users will skim AI summaries. The real science is the question, the pre-analysis plan, and the analysis. The 30-page paper is just vestigial wrapping paper.
Here’s an example. The text quotes this post, and follows up: “He got roasted on Bluesky for saying this. But he’s absolutely right, and the backlash proves his point: the field can’t even discuss the obvious without circling the wagons.”
03.03.2026 21:19 — 👍 7 🔁 0 💬 2 📌 0
The blog post is full of extreme language (“dead format walking,” “may not survive,” “absurd double standards”), but bad writers do that too. The bigger tell for me is evidence disconnected from argument.
03.03.2026 21:17 — 👍 15 🔁 0 💬 1 📌 0
To see where I'm coming from, here's yesterday's thread:
03.03.2026 21:17 — 👍 8 🔁 0 💬 1 📌 0
well thank goodness for that
03.03.2026 19:56 — 👍 0 🔁 0 💬 0 📌 0
Guys, I *specifically* became a medievalist because I'm a huge nerd who likes books and wanted to escape reality. Could we please knock it off, I'm trying to dissociate.
03.03.2026 04:34 — 👍 61 🔁 5 💬 1 📌 1
I wonder about this. I think, with reading, there's reason for hope. We're pretty good at recognizing when there are humans at the other end of a line, and that frisson of connection is unmistakable. I agree that they might not be able to produce it though! That takes practice.
03.03.2026 04:31 — 👍 3 🔁 0 💬 0 📌 0
I asked it to highlight the differences between two drafts and it did that fine, but MS Word will do that. I kind of liked the way it threw things out of order in my draft because organization is always hard for me and a little randomization helped me see it differently. But that might just be me!
03.03.2026 04:30 — 👍 1 🔁 0 💬 1 📌 0
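For contrast, a minimal Python sketch of the mechanical version of that task, the deterministic diffing that Word's Compare (or any diff tool) performs; the draft sentences and labels here are invented for illustration:

```python
# A purely mechanical "highlight the differences between two drafts."
# The draft text below is made up for the example.
import difflib

draft_one = """The archive was assembled in the fourteenth century.
Its records cover three centuries of litigation.""".splitlines()

draft_two = """The archive was assembled around the fourteenth century.
Its records cover three centuries of litigation and appeals.""".splitlines()

# unified_diff emits only the changed lines, marked with - and +.
for line in difflib.unified_diff(draft_one, draft_two,
                                 fromfile="draft_one", tofile="draft_two",
                                 lineterm=""):
    print(line)
```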
And communication by definition takes at least two people, living human people. So you could simulate communication with a machine, but you could never actually produce it. As with every simulacrum (see Punch the Monkey), if we are desperate enough we will accept it, but we still know the difference.
03.03.2026 04:26 — 👍 4 🔁 1 💬 0 📌 0
Even if you are trying to communicate research (historical, sociological, political, whatever), the writing is more than a vehicle; it is the thing.
What we prize in writing is new thoughts, new ways to communicate, and LLMs are specifically trained to reliably reproduce old ones.
I think it might be this. With coding--correct me if I'm wrong--the idea is for the writing (the code) to solve a problem. Code can vary and even show signatures, but ultimately it is in service of the product.
With writing, writing IS the product, and it is not in service of anything else.
In talking to machine learning folks, I hear that it is able to restrict itself to given material, but because it is purely predictive it is not able to tell the difference between things it ought to reference and things it is allowed to make up.
03.03.2026 04:15 — 👍 2 🔁 0 💬 1 📌 0
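A toy sketch of what "purely predictive" means, in Python (my illustration over a made-up three-sentence corpus, nothing like a production model): the generator below only tracks which word tends to follow which, and nowhere in it is there a slot for "this claim needs a source."

```python
# Toy next-word predictor (illustrative only; real LLMs are vastly
# larger, but the objective has the same shape: predict what comes next).
import random
from collections import defaultdict

# A tiny invented corpus standing in for training data.
corpus = ("mill argues that truth held without discussion becomes dead dogma . "
          "mill argues that liberty requires open discussion . "
          "han argues that power shapes what feels plausible .").split()

# Count word -> next-word transitions.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

random.seed(2)
word, output = "mill", ["mill"]
for _ in range(12):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)  # sampled by frequency, nothing else
    output.append(word)

print(" ".join(output))
# The output is always "plausible" relative to the training text, but the
# model has no representation of which statements are grounded in a source
# and which are stitched together from fragments.
```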
Speaking of!
03.03.2026 04:11 — 👍 45 🔁 0 💬 1 📌 0
In this case I was asking it to work from documents I provided, but I've also been told in the past not to use "hallucinate" because it anthropomorphizes it.
03.03.2026 04:08 — 👍 1 🔁 0 💬 1 📌 0
Epoch of nonsense
03.03.2026 03:38 — 👍 16 🔁 1 💬 0 📌 0
It’s not! I still feel discombobulated.
03.03.2026 03:36 — 👍 2 🔁 0 💬 0 📌 0
This makes sense, given the expertise of their creators!
03.03.2026 03:02 — 👍 3 🔁 0 💬 3 📌 0