autonomous vehicles are an object lesson in the impossibly complex challenge of trying to effectively navigate a dynamic natural world automatically. the entirety of the field's scholarly & engineering work might be most valuable as case studies in why uncritical automation is a fraught endeavor.
14.01.2026 15:31
Likes 57 · Reposts 12 · Replies 3 · Quotes 1
for the love of god, AI systems aren't "just tools that are neither good nor evil in and of themselves". AI systems *are* tools of capitalism. they exist as tools of capitalism. there is no AI system "in and of itself" outside of capitalism
29.12.2025 16:24
Likes 2709 · Reposts 639 · Replies 11 · Quotes 73
"This promise of an AI future, is really just a collective anxiety that wealthy people have about how well they're gonna be able to control us in the future."
- @tressiemcphd.bsky.social with an absolute mic drop moment about AI bullshit.
Incredible words.
Listen to all of it!
19.12.2025 12:25
Likes 8802 · Reposts 4077 · Replies 86 · Quotes 443
The "evidence-based" policy model is being abandoned in favour of getting AI into schools for "testing in the wild", and that's giving big tech and edtech the freedom to pump private tech into public education even harder than before. The "evidence" will only come later, once they're embedded.
29.11.2025 01:14
Likes 15 · Reposts 3 · Replies 1 · Quotes 0
I don't understand how anyone can watch how blatantly Grok is manipulated to answer the way ownership desires it to and then act like the other LLM chatbots couldn't possibly be similarly but less obviously compromised to produce responses in whatever way corporate interests and priorities dictate.
23.11.2025 19:13
Likes 5998 · Reposts 1739 · Replies 49 · Quotes 111
Screenshot from linked article:
Learning is done best in community. The most important function of educational institutions is providing spaces for building that community. Every time we suggest a student would be well served by "asking a chatbot," we are cutting off an opportunity for that student to engage with their classmates, instructors, or librarians. Those interactions, however small, are what constitute communities of learners.
Yes! This also comes up in my recent contribution to CHE:
www.chronicle.com/article/how-...
23.11.2025 14:40
Likes 113 · Reposts 38 · Replies 2 · Quotes 3
Journalist challenge: Use "Machine Learning" when you mean machine learning and "LLM" when you mean LLM. Ditch "AI" as a catch-all term; it's not useful for readers, and it helps companies trying to confuse the public by obscuring the roles played by different technologies. 🧪
22.11.2025 16:50
Likes 3711 · Reposts 1373 · Replies 58 · Quotes 128
How to turn off Gemini in your Gmail, Docs, Photos, and more - it's easy to opt out
It's a little hidden, but there is a way to remove Gemini from your favorite Google services.
Turn off Gemini in Google!
It's been turned on by default.
Go to your Drive, click the gear icon, click settings, go to "manage apps" and uncheck that nasty "use by default" box.
www.zdnet.com/article/how-...
27.10.2025 19:10
Likes 1002 · Reposts 701 · Replies 4 · Quotes 14
"Help you read faster" is so dishonest lol, like yeah if you consider "not doing the reading" as "reading faster"
20.10.2025 12:52
Likes 37 · Reposts 3 · Replies 1 · Quotes 0
Without Our Consent
When I wrote last week's round-up of "AI"-related news, I didn't include any of OpenAI's product releases, mostly because it's 2025 and I'm exhausted by this game that tech companies and tech journali...
"Too many people are busily promoting a version of 'AI literacy' that is simply training students how to use and consume 'AI' 'properly' – whatever that means – and refusing to admit that there may be no ethical usage of a fundamentally unethical, abusive technology," writes Audrey Watters.
08.10.2025 14:32
Likes 24 · Reposts 10 · Replies 2 · Quotes 2
Bargaining Committee | Penn Postdoc Union
In a time when workers in higher ed need to be as organized and coordinated as ever to fight threats and build the institutions we deserve, I'm proud to announce I'll be working alongside this great crew on the newly formed RAPUP-UAW bargaining committee 🫡 pennpostdocunion.org/bargaining-c...
03.10.2025 14:20
Likes 1 · Reposts 0 · Replies 0 · Quotes 0
Join
Joining the AAUP says that you're concerned about academic freedom, and about the way that basic freedom protects your teaching and research. Join today.
The @aaup.org beat Marco Rubio in court, with a Reagan-appointed judge ruling that ideological deportations obviously violate the First Amendment.
These cases cost money. And we need millions of people moving together to make them stick. If you're faculty in the US, join AAUP and join the fight.
01.10.2025 15:11
Likes 135 · Reposts 41 · Replies 1 · Quotes 1
I have a theory - much like how if you live in New York City for long enough you become culturally Jewish in a way, if you hang out online for too long you end up culturally Philadelphian
15.09.2025 01:09
Likes 1112 · Reposts 122 · Replies 14 · Quotes 65
Alpha's privacy policy accounts for this sort of tracking and more, claiming far more access to student information than is typical for companies selling AI to schools, including MagicSchool. Alpha can, for example, use webcams to record students, including to observe their eye contact (partly to detect engagement and environmental distractions). And it can monitor keyboard and mouse activity (to see if students are idle) and take screenshots and video of what students are seeing on-screen (in part to catch cheating). In the future, the policy notes, the school could collect data from sleep trackers or headbands worn during meditation.
This description of Alpha School really would make Foucault melt.
www.bloomberg.com/news/feature...
08.09.2025 16:03
Likes 12 · Reposts 5 · Replies 0 · Quotes 0
But what is education? Embodied and enactive cognitive sciences remind us that knowledge is not something discrete that an individual possesses and passes down but an inherently dynamic and evolving endeavour that develops in the process of embodied, curious and engaged interaction through dialogue. The pinnacle of cognition, particularly "human knowing", is inextricably interwoven with the interactions we engage in with each other and the physical, cultural and social world we inhabit, "so much so that individuals are not thinkable outside of their interactions and embeddedness in their (social) world" (De Jaegher, 2019). Dialogic models of knowledge and education emphasize that interactions between a student and teacher and/or peer provide "scaffolding" for how that child understands the world. The always-in-flux, active and continually transforming nature of human cognition necessitates that education be fundamentally an ongoing activity. Far from the reductionist view whereby "formal knowledge" can be packaged and acquired from an LLM, the classroom is an environment where love, trust, empathy, care and humility are fostered and mutually cultivated through dialogical interactions.
In this short piece, I lean on embodied cog sci to argue that we should refuse & resist LLMs in education (pp. 53-58) unesdoc.unesco.org/in/documentV...
"the classroom is an environment where love, trust, empathy, care & humility are fostered & mutually cultivated through dialogical interactions"
07.09.2025 10:06
Likes 568 · Reposts 223 · Replies 18 · Quotes 27
3.2 We do not have to "embrace the future" & we can turn back the tide
It must be the sheer magnitude of [artificial neural networks'] incompetence that makes them so popular.
Jerry A. Fodor (2000, p. 47)
Related to the rejection of expertise is the rejection of imagining a better future and the rejection of self-determination free from industry forces (Hajer and Oomen 2025; Stengers 2018; van Rossum 2025). Not only AI enthusiasts, but even some scholars whose expertise concentrates on identifying and critically interrogating ideologies and sociotechnical relationships – such as historians and gender scholars – unfortunately fall prey to the teleological belief that AI is an unstoppable force. They embrace it because alternative responses seem too difficult, incompatible with industry developments, or non-existent. Instead of falling for this, we should "refuse [AI] adoption in schools and colleges, and reject the narrative of its inevitability" (Reynoldson et al. 2025, n.p.; also Benjamin 2016; Campolo and Crawford 2020; CDH Team and Ruddick 2025; Garcia et al. 2022; Kelly et al. 2025; Lysen and Wyatt 2024; Sano-Franchini et al. 2024; Stengers 2018). Such rejection is possible and has historical precedent, to name just a few successful examples: Amsterdammers kicked out cars, rejecting that cycling through the Dutch capital should be deadly. Organised workers died for the eight-hour workday, the weekend and other workers' rights, and governments banned chlorofluorocarbons from fridges to mitigate ozone depletion in the atmosphere. And we know that even the tide itself famously turns back. People can undo things; and we will (cf. Albanese 2025; Boztas 2025; Kohnstamm Instituut 2025; van Laarhoven and van Vugt 2025). Besides, there will be no future to embrace if we deskill our students and selves, and allow the technology industry's immense contributions to climate crisis
2. the strange but often repeated cultish mantra that we need to "embrace the future" – this is so bizarre given, e.g., how destructive industry forces have proven to be in science, from petroleum to tobacco to pharmaceutical companies.
(Section 3.2 here doi.org/10.5281/zeno...)
4/n
06.09.2025 08:24
Likes 463 · Reposts 104 · Replies 3 · Quotes 27
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users – in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
06.09.2025 08:13
Likes 3786 · Reposts 1897 · Replies 110 · Quotes 390
Something I didn't get to say yesterday:
We heard over and over during the event about "human-centered" approaches to "AI". But if refusal is not on the table (at every level: individual students and teachers right up through UNESCO) then we have in fact centered the technology, not the people.
03.09.2025 10:35
Likes 681 · Reposts 238 · Replies 6 · Quotes 15
Since tomorrow is Labor Day and school is in session, I'd like to argue (again) that teaching about GenAI should include discussion of the extractive, exploitative labor conditions that make the technology possible. 🧵
01.09.2025 01:21
Likes 82 · Reposts 24 · Replies 4 · Quotes 1
This is an inevitable byproduct of automation. The AI boosters have been selling us that deskilling in some areas will result in upskilling in others, but we have no evidence of this. We don't even have a particularly good theory. We have salesmanship B.S.
28.08.2025 20:10
Likes 220 · Reposts 70 · Replies 6 · Quotes 5
we live in a world defined by replacing human labor (employees) with machines which then need human babysitting (independent contractors on slave wages)*
27.08.2025 21:33
Likes 245 · Reposts 74 · Replies 10 · Quotes 1
What frightens me isn't just that people have stopped reading. It's that they've replaced reading with mimicry. A quote here, a post there, stitched together to sound like wisdom. And sadly, in this world where books gather dust and posts go viral, it's very easy to confuse loudness for wisdom.
23.08.2025 07:33
Likes 247 · Reposts 64 · Replies 11 · Quotes 3
Firing and demoralizing feminized jobs as enemies of the state while brazenly bribing men with violent jobs that almost instantly put them into the middle of the middle class is very basic gendered warfare. Fulfilling the manosphere's promise.
15.08.2025 20:49
Likes 7542 · Reposts 2410 · Replies 16 · Quotes 57
"What's propping up 'AI' is not 'the people.' It's the police. And it's the petroleum industry.
As such, when I hear educators insist that 'AI' is the future that we need to be preparing students for, I wonder why they're so willing to build a world of prisons and climate collapse."
15.08.2025 11:46
Likes 40 · Reposts 18 · Replies 1 · Quotes 0
Kind of feel like it should be bigger national news that the poorest major city in the country is being forced to accept devastating service cuts to our public transit system because PA Republicans actively hate Philly and want to cause us pain
14.08.2025 21:32
Likes 1464 · Reposts 299 · Replies 14 · Quotes 11
EFF has been saying this for years: School monitoring software sacrifices student privacy for unproven promises of safety.
13.08.2025 20:49
Likes 107 · Reposts 39 · Replies 2 · Quotes 2
Both helpful tidbits to consider, ty! 2/3 reviews are some of the most positive I've ever gotten; one explicitly asked for only minor revisions. The negative review asks for v addressable nuance in a few spots. I'll likely keep my pride intact & move on, but am tempted to politely nudge the editor
11.08.2025 21:12
Likes 0 · Reposts 0 · Replies 0 · Quotes 0
In all likelihood I will simply take the L and move on from this journal and to a new one, but my pride wants the editors to more carefully consider the extremely affirming feedback two of the reviewers provided that seems overlooked in the summary 🥲
11.08.2025 20:39
Likes 1 · Reposts 0 · Replies 0 · Quotes 0
A very 2025 quandary: I received a rejection from a journal & the editor's summary of the reviews contains various inaccuracies, misnaming theoretical constructs and misattributing feedback to the wrong reviewers. My worry is that I got decisioned by AI. Is it petty to write a note to rebut the summary?
11.08.2025 20:36
Likes 1 · Reposts 0 · Replies 1 · Quotes 0
The answer to "how to use AI responsibly" is "don't."
05.08.2025 18:25
Likes 773 · Reposts 233 · Replies 6 · Quotes 4