
John Menton

@mrjmenton.bsky.social

👨‍🏫English Teacher ⚽FCBayern 📚"The Anxious Generation" 📺"House of the Dragon" 🎧"Football Weekly"

460 Followers  |  105 Following  |  1 Posts  |  Joined: 15.08.2024

Posts by John Menton (@mrjmenton.bsky.social)

Help Sheet: Resisting AI Mania in Schools

K-12 educators are under increasing pressure to use—and have students use—a wide range of AI tools. (The term
“AI” is used loosely here, just as it is by many purveyors and boosters.) Even those who envision benefits to schools
of this fast-evolving category of tech should approach the well-funded AI-in-education campaign with skepticism
and caution. Some of the primary arguments for teachers actively using AI tools and introducing students to AI as
early as kindergarten, however, are questionable or fallacious. What follows are four of the most common
arguments and rebuttals with links to sources. I have not attempted balance, in part because so much pro-AI
messaging is out there and discussion of risks and costs is often minimized in favor of hope or resignation. -ALF

Argument: “Schools need to prepare students for the jobs of the future.”
● The skills employers seek haven’t changed much over the decades—and include a lot of
“soft skills” like initiative, problem-solving, communication, and critical thinking.
● Early research is showing that using generative AI can degrade these key skills:
○ An MIT study showed adults using ChatGPT to help write an essay “had the lowest
brain engagement and ‘consistently underperformed at neural, linguistic, and
behavioral levels.’” Critically, “ChatGPT users got lazier with each subsequent essay,
often resorting to copy-and-paste by the end of the study.”
○ A business school study found that those who often used AI tools had worse critical
thinking skills, “mediated by increased cognitive offloading. Younger participants exhibited
higher dependence on AI tools and lower critical thinking scores.”
○ Another study revealed that those using ChatGPT “engaged less in metacognitive
activities...For instance, learners in the AI group frequently looped back to ChatGPT for
feedback rather than reflecting independently. This dependency not only undermines
critical thinking but also risks long-term skill stagnati…


Argument: “AI is a tool, just like a calculator.”
● Calculators don’t provide factually wrong answers, but AI tools have. Last year, Google’s AI
search returned, among other falsehoods, that cats have gone to the moon, that Barack
Obama is Muslim, and that glue goes on pizza. Even though AI tools have improved and are
expected to improve further, children in schools shouldn’t be used as tech firms’ guinea pigs
for undertested, unregulated products while AI firms lobby elected officials to actively resist regulation.
● Calculators don’t provide dangerous, even deadly feedback. In one study, a “chatbot
recommended that a user, who said they were recovering from addiction, take a ‘small hit’ of
methamphetamine” because, it said, it’s “‘what makes you able to do your job to the best of
your ability.’” Users have received threatening messages from chatbots.
● Calculators don’t pose mental health risks because they aren’t potentially addictive or
designed to encourage repeated use. They don’t flatter, direct, or manipulate. Chatbots have
been designed this way—and this has led to dreadful mental health outcomes for some,
including users in a New York Times report. Alleging a chatbot encouraged their teen to die
by suicide, parents in Florida filed a lawsuit against its maker.
● Calculators don’t lie. Chatbots, however, have misled users. Writer Amanda Guinzburg
shared screenshots of interactions with one that she asked to describe several of her essays.
It spewed out invented material, showing the chatbot hadn’t actually accessed and processed
the essays. After much prodding, it “admitted” it had only acted as though it had done that
requested work, spit out mea culpas—and went on to invent or “lie” again.
● Calculators can’t be used to spread propaganda. AI tools, though, including those meant for
schools, should worry us. Law professor Eric Muller’s back-and-forth with SchoolAI’s “Anne
Frank” character showed his “helluva time trying to get her to say a bad word about Nazis.” In
thi…


Argument: “AI won’t replace teachers, but it will save them time and improve their
effectiveness.”
● Adding edtech does not necessarily save teachers time. A recent study found that learning
management systems sold to schools over the past decade-plus as time-savers aren’t
making teaching easier. Instead, researchers found this tech (e.g., Google Classroom,
Canvas) is often burdensome and contributes to burnout. As one teacher put it, it “just adds
layers to tasks.”
● “Extra time” is rarely returned to teachers. AI proponents argue that if teachers use AI tools
to grade, prepare lessons, or differentiate materials, they’ll have more time to work with
students. But there are always new initiatives, duties, or committee assignments—the unpaid
work districts rely on—to suck up that time. In a culture of austerity and with a USDOE that is
cutting spending, teachers are likely to be assigned more students. When class sizes grow,
students get less attention, and positions can be cut.
● AI can’t replace what teachers do, but that doesn’t mean teachers won’t be replaced.
Schools are already doing it: Arizona approved a charter school in which students spend
mornings working with AI and the role of teacher is reduced to “guide.” Ed tech expert Neil
Selwyn argues those in “industry and policy circles...hostile to the idea of expensively trained
expert professional educators who have [tenure], pension rights and union protection...
[welcome] AI replacement as a way of undermining the status of the professional teacher.”
● Tech firms have been selling schools on untested products for years. Technophilia has led
to students being on screens for hours in school each week even when their phones are
banned. Writer Jess Grose explains, “Companies never had to prove that devices or software,
broadly speaking, helped students learn before those devices had wormed their way into
America’s public schools.” AI products appear to be no different.
● Efficiency is not effectiveness. “…


Argument: “Students are already using AI, so we have to teach them ethical use.”
● If schools want ethical students, teach ethics. More students are using AI tools to cheat, an
age-old problem these tools make much easier. This won’t be addressed by showing students
how to use this minute’s AI—an argument implying students don’t know what plagiarism is
(solved by teaching about plagiarism) or don’t understand academic integrity (solved by
teaching and enforcing its bounds), or that teachers create weak assignments or fail to convey
purpose. The latter problems aren’t solved by redirecting students motivated and able to cheat.
● Students can be educated on the ethics of AI without encouraging use of AI tools. They can
be taught, as part of media literacy and social media safety programs, about AI’s potential
and applications as well as how it can enable predation, perpetuate bias, and spread
disinformation. They should be taught about the risks of AI and its various social, economic,
and environmental costs. Giving a nod to these issues while integrating AI throughout
schools sends a strong message: the schools don’t really care and neither should students.
● Children can’t be expected to use AI responsibly when adults aren’t. Many pushing schools
to embrace AI don’t know much about it. One example: Education Secretary Linda McMahon,
who said kindergartners should be taught A1 (a steak sauce). The LA Times introduced a
biased and likely politically motivated AI feature. The Chicago Sun-Times published a
summer reading list including nonexistent books—yet teachers are told to use the same tools
to do similar work. Educators using AI to cut corners can strike students as hypocritical.
● The many costs of AI call into question the possibility of ethical AI use. These include:
○ Energy - AI data centers need huge amounts of water as coolant as well as electricity, pulling
these resources from their communities—which tend to be lower-income—straining the grid,
and raising household cos…


I put together a 4-page doc for those wary of the rush to integrate AI in K-12 schools (though much applies beyond).

Four of the main arguments for teachers using AI tools & introducing kids to AI as early as kindergarten are addressed with rebuttals linked to sources.

25.06.2025 09:45 — 👍 310    🔁 118    💬 22    📌 31

Julian @sccenglish.bsky.social has compiled a starter pack for Irish teachers. go.bsky.app/BqbTnC7

17.11.2024 09:51 — 👍 14    🔁 3    💬 5    📌 0

This platform is such a breath of fresh air!

Finishing off Haidt's "The Anxious Generation" this morning ... incredibly important for teachers, parents and teens themselves.

16.11.2024 10:42 — 👍 2    🔁 1    💬 0    📌 0

Hello all at Bluesky! We are Kennys Bookshop - an independent bookshop based in Galway, Ireland where we've been selling books since 1940, a family business, a bricks and mortar shop & an online shop (www.kennys.ie), a shop with new and secondhand books & a place passionate about all things books! 👋

14.11.2024 20:41 — 👍 300    🔁 80    💬 13    📌 24

A psychoanalytical reading of Macbeth

16.08.2024 09:37 — 👍 2    🔁 1    💬 1    📌 1