Help Sheet: Resisting AI Mania in Schools

K-12 educators are under increasing pressure to use, and have students use, a wide range of AI tools. (The term "AI" is used loosely here, just as it is by many purveyors and boosters.) Even those who envision benefits to schools from this fast-evolving category of tech should approach the well-funded AI-in-education campaign with skepticism and caution. Some of the primary arguments for teachers actively using AI tools and introducing students to AI as early as kindergarten are questionable or fallacious. What follows are four of the most common arguments, with rebuttals linked to sources. I have not attempted balance, in part because so much pro-AI messaging is already out there, and discussion of risks and costs is often minimized in favor of hype or resignation. -ALF

Argument: "Schools need to prepare students for the jobs of the future."

• The skills employers seek haven't changed much over the decades, and they include many "soft skills" like initiative, problem-solving, communication, and critical thinking.
• Early research is showing that using generative AI can degrade these key skills:
  • An MIT study showed that adults using ChatGPT to help write an essay "had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels.'" Critically, "ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study."
  • A business school study found that those who used AI tools often had worse critical thinking skills, "mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores."
  • Another study revealed that those using "ChatGPT engaged less in metacognitive activities... For instance, learners in the AI group frequently looped back to ChatGPT for feedback rather than reflecting independently. This dependency not only undermines critical thinking but also risks long-term skill stagnation…"
Argument: "AI is a tool, just like a calculator."

• Calculators don't provide factually wrong answers, but AI tools have. Last year, Google's AI search returned, among other falsehoods, that cats have gone to the moon, that Barack Obama is Muslim, and that glue goes on pizza. Even though AI tools have improved and are expected to keep improving, children in schools shouldn't be used as tech firms' guinea pigs for undertested, unregulated products, especially while AI firms enlist elected officials in actively resisting regulation.
• Calculators don't provide dangerous, even deadly feedback. In one study, a "chatbot recommended that a user, who said they were recovering from addiction, take a 'small hit' of methamphetamine" because, it said, it's "what makes you able to do your job to the best of your ability." Users have also received threatening messages from chatbots.
• Calculators don't pose mental health risks: they aren't potentially addictive or designed to encourage repeated use, and they don't flatter, direct, or manipulate. Chatbots have been designed to do all of this, and that design has led to dreadful mental health outcomes for some, including users profiled in a New York Times report. Alleging a chatbot encouraged their teen to die by suicide, parents in Florida filed a lawsuit against its maker.
• Calculators don't lie. Chatbots, however, have misled users. Writer Amanda Guinzburg shared screenshots of interactions with one she had asked to describe several of her essays. It spewed out invented material, showing the chatbot hadn't actually accessed and processed the essays. After much prodding, it "admitted" it had only acted as though it had done the requested work, spat out mea culpas, and went on to invent, or "lie," again.
• Calculators can't be used to spread propaganda. AI tools, though, can, and those meant for schools should worry us.
Law professor Eric Muller's back-and-forth with SchoolAI's "Anne Frank" character showed him having a "helluva time trying to get her to say a bad word about Nazis." In this…
Argument: "AI won't replace teachers, but it will save them time and improve their effectiveness."

• Adding edtech does not necessarily save teachers time. A recent study found that learning management systems, sold to schools over the past decade-plus as time-savers, aren't delivering on making teaching easier. Instead, researchers found this tech (e.g., Google Classroom, Canvas) is often burdensome and contributes to burnout. As one teacher put it, it "just adds layers to tasks."
• "Extra time" is rarely returned to teachers. AI proponents argue that if teachers use AI tools to grade, prepare lessons, or differentiate materials, they'll have more time to work with students. But there are always new initiatives, duties, or committee assignments (the unpaid work districts rely on) to suck up that time. In a culture of austerity, and with a USDOE that is cutting spending, teachers are likely to be assigned more students. When class sizes grow, students get less attention, and positions can be cut.
• AI can't replace what teachers do, but that doesn't mean teachers won't be replaced. Schools are already doing it: Arizona approved a charter school in which students spend mornings working with AI and the teacher's role is reduced to "guide." Edtech expert Neil Selwyn argues that those in "industry and policy circles... hostile to the idea of expensively trained expert professional educators who have [tenure], pension rights and union protection... [welcome] AI replacement as a way of undermining the status of the professional teacher."
• Tech firms have been selling schools untested products for years. Technophilia has led to students being on screens for hours in school each week, even where their phones are banned. Writer Jess Grose explains, "Companies never had to prove that devices or software, broadly speaking, helped students learn before those devices had wormed their way into America's public schools." AI products appear to be no different.
• Efficiency is not effectiveness.
Argument: "Students are already using AI, so we have to teach them ethical use."

• If schools want ethical students, teach ethics. More students are using AI tools to cheat, an age-old problem these tools make much easier. This won't be addressed by showing students how to use this minute's AI. That argument implies students don't know what plagiarism is (solved by teaching about plagiarism), don't understand academic integrity (solved by teaching and enforcing its bounds), or that teachers create weak assignments or fail to convey purpose. None of these is solved by attempting to redirect students who are motivated and able to cheat.
• Students can be educated about the ethics of AI without being encouraged to use AI tools. They can be taught, as part of media literacy and social media safety programs, about AI's potential and applications, as well as how it can enable predation, perpetuate bias, and spread disinformation. They should also be taught about the risks of AI and its various social, economic, and environmental costs. Giving a nod to these issues while integrating AI throughout schools sends a strong message: the schools don't really care, so neither should students.
• Children can't be expected to use AI responsibly when adults aren't. Many of those pushing schools to embrace AI don't know much about it. One example: Education Secretary Linda McMahon, who said kindergartners should be taught "A1" (a steak sauce). The LA Times introduced a biased and likely politically motivated AI feature. The Chicago Sun-Times published a summer reading list that included nonexistent books, yet teachers are told to use the same tools to do similar work. Educators using AI to cut corners can strike students as hypocritical.
• The many costs of AI call into question the very possibility of ethical AI use.
These include:

• Energy: AI data centers need huge amounts of water as coolant, as well as electricity, pulling these resources from their communities (which tend to be lower-income), straining the grid, and raising household costs…