“We’ve lost the ability to love people because we litmus test them at every point. . .” ibid
overcast.fm/+AA3KNhgbFXg...
@budtheteacher.com.bsky.social
I’m learning.
“I love a good argument. But I also love grace.” 
A fine framing of public discourse from Jon Stewart interviewed by David Remnick on the New Yorker Radio Hour. 
overcast.fm/+AA3KNhgbFXg...
If somebody suggests you watch _The Life of Chuck_, you should listen to them.
28.09.2025 23:16 — 👍 1 🔁 0 💬 1 📌 0
First day of school here.
A good moment for an oldie - my advice to teachers at the start of a new year. Still advice I would give. Especially now. 
Happy new year!
budtheteacher.com/blog/2008/08...
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens. www.nytimes.com/2025/08/08/t...
08.08.2025 23:02 — 👍 0 🔁 0 💬 0 📌 0
The assumption that generative AI could be a "valuable partner" is unevidenced, and the example activity is critical thinking work that could better be done in the absence of AI. It's thinking of something you COULD do with AI rather than what students SHOULD do to learn.
05.08.2025 18:06 — 👍 180 🔁 38 💬 6 📌 9
Help Sheet: Resisting AI Mania in Schools

K-12 educators are under increasing pressure to use—and have students use—a wide range of AI tools. (The term “AI” is used loosely here, just as it is by many purveyors and boosters.) Even those who envision benefits to schools from this fast-evolving category of tech should approach the well-funded AI-in-education campaign with skepticism and caution. Some of the primary arguments for teachers actively using AI tools and introducing students to AI as early as kindergarten are questionable or fallacious. What follows are four of the most common arguments and rebuttals, with links to sources. I have not attempted balance, in part because so much pro-AI messaging is out there and discussion of risks and costs is often minimized in favor of hope or resignation. -ALF

Argument: “Schools need to prepare students for the jobs of the future.”
● The skills employers seek haven’t changed much over the decades—and include a lot of “soft skills” like initiative, problem-solving, communication, and critical thinking.
● Early research is showing that using generative AI can degrade these key skills:
○ An MIT study showed adults using ChatGPT to help write an essay “had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’” Critically, “ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”
○ A business school study found those who used AI tools often had worse critical thinking skills, “mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores.”
○ Another study revealed those using “ChatGPT engaged less in metacognitive activities...For instance, learners in the AI group frequently looped back to ChatGPT for feedback rather than reflecting independently.
This dependency not only undermines critical thinking but also risks long-term skill stagnati…
Argument: “AI is a tool, just like a calculator.”
● Calculators don’t provide factually wrong answers, but AI tools have. Last year, Google’s AI search returned, among other falsehoods, that cats have gone to the moon, that Barack Obama is Muslim, and that glue goes on pizza. Even though AI tools have improved and are expected to keep improving, children in schools shouldn’t be used as tech firms’ guinea pigs for undertested, unregulated products while AI firms engage elected officials in actively resisting regulation.
● Calculators don’t provide dangerous, even deadly feedback. In one study, a “chatbot recommended that a user, who said they were recovering from addiction, take a ‘small hit’ of methamphetamine” because, it said, it’s “‘what makes you able to do your job to the best of your ability.’” Users have received threatening messages from chatbots.
● Calculators don’t pose mental health risks because they aren’t potentially addictive or designed to encourage repeated use. They don’t flatter, direct, or manipulate. Chatbots have been designed this way—and this has led to dreadful mental health outcomes for some, including users in a New York Times report. Alleging a chatbot encouraged their teen to die by suicide, parents in Florida filed a lawsuit against its maker.
● Calculators don’t lie. Chatbots, however, have misled users. Writer Amanda Guinzburg shared screenshots of interactions with one that she asked to describe several of her essays. It spewed out invented material, showing the chatbot hadn’t actually accessed and processed the essays. After much prodding, it “admitted” it had only acted as though it had done the requested work, spit out mea culpas—and went on to invent, or “lie,” again.
● Calculators can’t be used to spread propaganda. AI tools can, though, and that should worry us, including with those meant for schools.
Law professor Eric Muller’s back-and-forth with SchoolAI’s “Anne Frank” character showed his “helluva time trying to get her to say a bad word about Nazis.” In thi…
Argument: “AI won’t replace teachers, but it will save them time and improve their effectiveness.”
● Adding edtech does not necessarily save teachers time. A recent study found that learning management systems, sold to schools over the past decade-plus as time-savers, aren’t delivering on making teaching easier. Instead, researchers found this tech (e.g. Google Classroom, Canvas) is often burdensome and contributes to burnout. As one teacher put it, it “just adds layers to tasks.”
● “Extra time” is rarely returned to teachers. AI proponents argue that if teachers use AI tools to grade, prepare lessons, or differentiate materials, they’ll have more time to work with students. But there are always new initiatives, duties, or committee assignments—the unpaid work districts rely on—to suck up that time. In a culture of austerity, and with a USDOE that is cutting spending, teachers are likely to be assigned more students. When class sizes grow, students get less attention, and positions can be cut.
● AI can’t replace what teachers do, but that doesn’t mean teachers won’t be replaced. Schools are already doing it: Arizona approved a charter school in which students spend mornings working with AI and the role of teacher is reduced to “guide.” Ed tech expert Neil Selwyn argues those in “industry and policy circles...hostile to the idea of expensively trained expert professional educators who have [tenure], pension rights and union protection... [welcome] AI replacement as a way of undermining the status of the professional teacher.”
● Tech firms have been selling schools on untested products for years. Technophilia has led to students being on screens for hours in school each week even when their phones are banned. Writer Jess Grose explains, “Companies never had to prove that devices or software, broadly speaking, helped students learn before those devices had wormed their way into America’s public schools.” AI products appear to be no different.
● Efficiency is not effectiveness.
“…
Argument: “Students are already using AI, so we have to teach them ethical use.”
● If schools want ethical students, teach ethics. More students are using AI tools to cheat, an age-old problem the tools make much easier. This won’t be addressed by showing students how to use this minute’s AI, an argument implying students don’t know what plagiarism is (solved by teaching about plagiarism) or don’t understand academic integrity (solved by teaching and enforcing its bounds)—or that teachers create weak assignments or don’t convey purpose. The latter aren’t solved by attempting to redirect students motivated and able to cheat.
● Students can be educated on the ethics of AI without encouraging use of AI tools. They can be taught, as part of media literacy and social media safety programs, about AI’s potential and applications as well as how it can enable predation, perpetuate bias, and spread disinformation. They should be taught about the risks of AI and its various social, economic, and environmental costs. Giving a nod to these issues while integrating AI throughout schools sends a strong message: the schools don’t really care, and neither should students.
● Children can’t be expected to use AI responsibly when adults aren’t. Many pushing schools to embrace AI don’t know much about it. One example: Education Secretary Linda McMahon, who said kindergartners should be taught A1 (a steak sauce). The LA Times introduced a biased and likely politically motivated AI feature. The Chicago Sun-Times published a summer reading list including nonexistent books—yet teachers are told to use the same tools to do similar work. Educators using AI to cut corners can strike students as hypocritical.
● The many costs of AI call into question the possibility of ethical AI use.
These include:
○ Energy - AI data centers need huge amounts of water as coolant as well as electricity, pulling these resources from their communities—which tend to be lower-income—straining the grid, and raising household cos…
I put together a 4-page doc for those wary of the rush to integrate AI in K-12 schools (though much applies beyond).
Four of the main arguments for teachers using AI tools & introducing kids to AI as early as kindergarten are addressed with rebuttals linked to sources.
The first:
27.06.2025 11:35 — 👍 9 🔁 3 💬 1 📌 0
If you were thinking about rereading the Constitution with your book club, but you’re maybe in between book clubs, this podcast is for you. Highly recommend.
overcast.fm/+AAyIOyIrdZo
Kicking off the 2025 CASE Convention. Learning with Colorado colleagues and educational leaders.
23.07.2025 14:08 — 👍 0 🔁 0 💬 0 📌 0
I dunno. Maybe it’s not the best use of time to mock or parody or amplify the Coldplay video thing.
22.07.2025 15:41 — 👍 0 🔁 0 💬 0 📌 0
This morning’s beach read. Catching up on some back issues.
17.07.2025 15:24 — 👍 0 🔁 0 💬 0 📌 0
Getting to the fun faster, perhaps.
16.07.2025 20:45 — 👍 0 🔁 0 💬 1 📌 0
It feels like lots of what I’m seeing.
16.07.2025 20:32 — 👍 0 🔁 0 💬 1 📌 0
“We ought to think about A.I. as an entertainment system before anything else.”
www.nytimes.com/2025/07/16/o...
“Poetry and art in general can be this amazing connective tool. . . . It engenders empathy. And sometimes I can forget this, but adding beauty to the world is a thing unto itself. We were born astonished. We should never grow out of our astonishment.”
www.nytimes.com/2025/07/15/a...
The future is weird and uncomfortable.
07.07.2025 13:56 — 👍 0 🔁 0 💬 0 📌 0
Absolutely.
06.07.2025 22:18 — 👍 1 🔁 0 💬 1 📌 0
This book is well worth your time. And the time of anyone suggesting to you that “this time, it’s different” when they tell you to believe everything about the future of “AI.”
thecon.ai
People make choices. And people make the rules that LLMs follow, rules that only look like choices.
Computers don’t make choices. 
But writers do.
Does “AI” have “diction”?
That suggests agency. LLMs aren’t agentic.
I think it’s great that we’re talking about how words get put together and the stats of such things. 
Language matters. Even when the robots use it.
Indiana Jones and the Dryer of Single Socks
03.07.2025 00:21 — 👍 2 🔁 0 💬 0 📌 0
Writing is hard work. Especially for scientists.
www.nytimes.com/2025/07/02/h...
Building customized LLM agents in a Microsoft tool on iPadOS via the Chrome web browser. 
The future is weird. 
#istelive25
As usual, @hickstro is doing good work on how to teach writing with technology. 
These resources on AI and writing instruction are well worth your time. 
docs.google.com/presentation...
Okay. There was a polite greeting.
30.06.2025 14:34 — 👍 1 🔁 0 💬 1 📌 0
Saw him yesterday, but we kept our hands to ourselves, as gentlemen do.
30.06.2025 14:33 — 👍 0 🔁 0 💬 2 📌 0
I’ve never let my lack of popularity interfere with the inflation of my ego.
30.06.2025 14:30 — 👍 0 🔁 0 💬 0 📌 0