Turn off Gemini in Google!
It's been turned on by default.
Go to your Drive, click the gear icon, click settings, go to "manage apps" and uncheck that nasty "use by default" box.
www.zdnet.com/article/how-...
@tessbern.bsky.social
Postdoc @ Penn GSE | 2024 NAEd/Spencer diss fellow | science teacher educator | thinking/worrying about how edtech is reshaping classroom teaching www.tessbernhard.com
"Help you read faster" is so dishonest lol, like yeah if you consider "not doing the reading" as "reading faster"
20.10.2025 12:52 — 👍 37 🔁 3 💬 1 📌 0

"Too many people are busily promoting a version of '"AI" literacy' that is simply training students how to use and consume 'AI' 'properly' – whatever that means – and refusing to admit that there may be no ethical usage of a fundamentally unethical, abusive technology," writes Audrey Watters.
08.10.2025 14:32 — 👍 24 🔁 10 💬 2 📌 2

In a time when workers in higher ed need to be as organized and coordinated as ever to fight threats and build the institutions we deserve, I'm proud to announce I'll be working alongside this great crew on the newly formed RAPUP-UAW bargaining committee 🫡 pennpostdocunion.org/bargaining-c...
03.10.2025 14:20 — 👍 1 🔁 0 💬 0 📌 0

The @aaup.org beat Marco Rubio in court, with a Reagan-appointed judge ruling that ideological deportations obviously violate the first amendment.
These cases cost money. And we need millions of people moving together to make them stick. If you're faculty in the US, join AAUP and join the fight.
I have a theory - much like how if you live in New York City for long enough you become culturally Jewish in a way, if you hang out online for too long you end up culturally Philadelphian
15.09.2025 01:09 — 👍 1120 🔁 123 💬 14 📌 66

Alpha’s privacy policy accounts for this sort of tracking and more, claiming far more access to student information than is typical for companies selling AI to schools, including MagicSchool. Alpha can, for example, use webcams to record students, including to observe their eye contact (partly to detect engagement and environmental distractions). And it can monitor keyboard and mouse activity (to see if students are idle) and take screenshots and video of what students are seeing on-screen (in part to catch cheating). In the future, the policy notes, the school could collect data from sleep trackers or headbands worn during meditation.
This description of Alpha School really would make Foucault melt.
www.bloomberg.com/news/feature...
But what is education? Embodied and enactive cognitive sciences remind us that knowledge is not something discrete that an individual possesses and passes down but an inherently dynamic and evolving endeavour that develops in the process of embodied, curious and engaged interaction through dialogue. The pinnacle of cognition, particularly ‘human knowing’, is inextricably interwoven with interactions we engage in with each other and the physical, cultural and social world we inhabit, ‘so much so that individuals are not thinkable outside of their interactions and embeddedness in their (social) world’ (De Jaegher, 2019). Dialogic models of knowledge and education emphasize that interactions between a student and teacher and/or peer provide ‘scaffolding’ for how that child understands the world. The always in flux, active and continually transforming nature of human cognition necessitates that education be fundamentally an ongoing activity. Far from the reductionist view whereby ‘formal knowledge’ can be packaged and acquired from an LLM, the classroom is an environment where love, trust, empathy, care and humility are fostered and mutually cultivated through dialogical interactions
In this short piece, I lean on embodied cog sci to argue that we should refuse & resist llms in education (pp. 53-58) unesdoc.unesco.org/in/documentV...
"the classroom is an environment where love, trust, empathy, care & humility are fostered & mutually cultivated through dialogical interactions"
3.2 We do not have to ‘embrace the future’ & we can turn back the tide

It must be the sheer magnitude of [artificial neural networks’] incompetence that makes them so popular. — Jerry A. Fodor (2000, p. 47)

Related to the rejection of expertise is the rejection of imagining a better future and the rejection of self-determination free from industry forces (Hajer and Oomen 2025; Stengers 2018; van Rossum 2025). Not only AI enthusiasts, but even some scholars whose expertise concentrates on identifying and critically interrogating ideologies and sociotechnical relationships — such as historians and gender scholars — unfortunately fall prey to the teleological belief that AI is an unstoppable force. They embrace it because alternative responses seem too difficult, incompatible with industry developments, or non-existent. Instead of falling for this, we should “refuse [AI] adoption in schools and colleges, and reject the narrative of its inevitability.” (Reynoldson et al. 2025, n.p., also Benjamin 2016; Campolo and Crawford 2020; CDH Team and Ruddick 2025; Garcia et al. 2022; Kelly et al. 2025; Lysen and Wyatt 2024; Sano-Franchini et al. 2024; Stengers 2018).

Such rejection is possible and has historical precedent, to name just a few successful examples: Amsterdammers kicked out cars, rejecting that cycling through the Dutch capital should be deadly. Organised workers died for the eight-hour workday, the weekend and other workers’ rights, and governments banned chlorofluorocarbons from fridges to mitigate ozone depletion in the atmosphere. And we know that even the tide itself famously turns back. People can undo things; and we will (cf. Albanese 2025; Boztas 2025; Kohnstamm Instituut 2025; van Laarhoven and van Vugt 2025). Besides, there will be no future to embrace if we deskill our students and selves, and allow the technology industry’s immense contributions to climate crisis
2. the strange but often repeated cultish mantra that we need to "embrace the future" — this is so bizarre given, e.g. how destructive industry forces have proven to be in science, from petroleum to tobacco to pharmaceutical companies.
(Section 3.2 here doi.org/10.5281/zeno...)
4/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Something I didn't get to say yesterday:
We heard over and over during the event about "human-centered" approaches to "AI". But if refusal is not on the table (at every level: individual students and teachers right up through UNESCO) then we have in fact centered the technology, not the people.
Since tomorrow is Labor Day and school is in session, I'd like to argue (again) that teaching about GenAI should include discussion of the extractive, exploitative labor conditions that make the technology possible. 🧵
01.09.2025 01:21 — 👍 82 🔁 24 💬 4 📌 1

This is an inevitable byproduct of automation. The AI boosters have been selling us that deskilling in some areas will result in upskilling in others, but we have no evidence of this. We don't even have a particularly good theory. We have salesmanship B.S.
28.08.2025 20:10 — 👍 220 🔁 70 💬 6 📌 5

we live in a world defined by replacing human labor (employees) with machines which then need human babysitting (independent contractors on slave wages)*
27.08.2025 21:33 — 👍 246 🔁 74 💬 10 📌 1

What frightens me isn’t just that people have stopped reading. It’s that they’ve replaced reading with mimicry. A quote here, a post there, stitched together to sound like wisdom. And sadly, in this world where books gather dust and posts go viral, it’s very easy to confuse loudness for wisdom.
23.08.2025 07:33 — 👍 251 🔁 65 💬 11 📌 3

Firing and demoralizing feminized jobs as enemies of the state while brazenly bribing men with violent jobs that almost instantly put them into the middle of middle class is very basic gendered warfare. Fulfilling the manosphere’s promise.
15.08.2025 20:49 — 👍 7581 🔁 2428 💬 17 📌 58

"What's propping up 'AI' is not 'the people.' It's the police. And it's the petroleum industry.
As such, when I hear educators insist that 'AI' is the future that we need to be preparing students for, I wonder why they're so willing to build a world of prisons and climate collapse."
Kind of feel like it should be bigger national news that the poorest major city in the country is being forced to accept devastating service cuts to our public transit system because PA Republicans actively hate Philly and want to cause us pain
14.08.2025 21:32 — 👍 1469 🔁 300 💬 14 📌 11

EFF has been saying this for years: School monitoring software sacrifices student privacy for unproven promises of safety.
13.08.2025 20:49 — 👍 109 🔁 39 💬 2 📌 2

Both helpful tidbits to consider, ty! 2/3 reviews are some of the most positive I've ever gotten, one explicitly asked for only minor revisions. The negative review asks for v addressable nuance in a few spots. I'll likely keep my pride intact & move on, but am tempted to politely nudge the editor
11.08.2025 21:12 — 👍 0 🔁 0 💬 0 📌 0

In all likelihood I will simply take the L and move on from this journal and to a new one, but my pride wants the editors to more carefully consider the extremely affirming feedback two of the reviewers provided that seems overlooked in the summary 🥲
11.08.2025 20:39 — 👍 1 🔁 0 💬 0 📌 0

A very 2025 quandary: I received a rejection from a journal & the editor's summary of the reviews contains various inaccuracies, misnaming theoretical constructs and misattributing feedback to the wrong reviewers. My worry is that I got decisioned by AI. Is it petty to write a note to rebut the summary?
11.08.2025 20:36 — 👍 1 🔁 0 💬 1 📌 0

The answer to “how to use AI responsibly” is “don’t.”
05.08.2025 18:25 — 👍 776 🔁 234 💬 6 📌 4

What’s a technology that you think is overhyped?

I’m going to give a sideways answer to this, which is that the venture capital business model needs to be understood as requiring hype. You can go back to the Netscape IPO, and that was the proof point that made venture capital the financial lifeblood of the tech industry. Venture capital looks at valuations and growth, not necessarily at profit or revenue. So you don’t actually have to invest in technology that works, or that even makes a profit, you simply have to have a narrative that is compelling enough to float those valuations. So you see this repetitive and exhausting hype cycle as a feature in this industry.

A couple of years ago, you would have been asking me about the metaverse, then last year, you would have asked me about Web3 and crypto, and for each of these inflection points there’s an Andreessen Horowitz manifesto. It’s not simply that one piece of technology is overhyped, it’s that hype is a necessary ingredient of the current business ecosystem of the tech industry. We should examine how often the financial incentive for hype is rewarded without any real social returns, without any meaningful progress in technology, without these tools and services and worlds ever actually manifesting. That’s key to understanding the growing chasm between the narrative of techno-optimists and the reality of our tech-encumbered world.
Stand by this: www.politico.com/newsletters/...
19.02.2025 16:42 — 👍 9775 🔁 3181 💬 162 📌 355

Join the Fight

The issue is not whether you personally use Microsoft CoPilot to help with slogging through emails, and it’s not about punishing students. Rather, it is about the value of your work, being paid appropriately for it, the importance of learning and intellectual curiosity, being able to have control over your working conditions, and caring about the future of participation in a democratic society.
🔥 from @hellobrittparis.bsky.social, Lindsey Weinberg, and Emma May on the weaponization of AI in higher ed.
academeblog.org/2025/07/22/f...
You know what emergent tech of the last decade actually works well, and was adopted by millions?
3D printing. It’s great! So many applications.
But also nobody is sneering at you for not having or utilizing 3D printing. Nobody is trying to sneak a 3D printer into your garage without your consent
🚨WE WON OUR UNION 🚨
18.07.2025 02:12 — 👍 257 🔁 34 💬 9 📌 12

Every "tech" guy is just a VC guy in a subculture that gets called "tech" for no particular reason. Most real "tech" - like, i dunno, cutting edge materials research - doesn't get called that while "a new pizza delivery app" does
10.07.2025 06:31 — 👍 2477 🔁 545 💬 36 📌 25

"Men who sell machines that mimic people want us to become people who mimic machines. They want techno feudal subjects who will believe and do what they’re told. We, as people, are being strategically simplified. This is a fascist process."
organizingmythoughts.org/some-thought...
SMH: news that the American Federation of Teachers is partnering with Open AI and Microsoft on an AI initiative is especially disappointing because teachers’ unions can and should fight back against Big Tech’s attempts to deprofessionalize and disempower educators.
04.07.2025 21:00 — 👍 118 🔁 28 💬 1 📌 8