Tess Bernhard

@tessbern.bsky.social

Postdoc @ Penn GSE | 2024 NAEd/Spencer diss fellow | science teacher educator | thinking/worrying about how edtech is reshaping classroom teaching www.tessbernhard.com

520 Followers  |  303 Following  |  30 Posts  |  Joined: 19.10.2023

Posts by Tess Bernhard (@tessbern.bsky.social)

autonomous vehicles are an object lesson in the impossibly complex challenge of trying to automatically navigate a dynamic natural world. the entirety of the field's scholarly & engineering work might be most valuable as case studies in why uncritical automation is a fraught endeavor.

14.01.2026 15:31 — 👍 57    🔁 12    💬 3    📌 1

for the love of god, AI systems aren't "just tools that are neither good nor evil in and of themselves". AI systems *are* tools of capitalism. they exist as tools of capitalism. there is no AI system "in and of itself" outside of capitalism

29.12.2025 16:24 — 👍 2709    🔁 639    💬 11    📌 73

"This promise of an AI future, is really just a collective anxiety that wealthy people have about how well they're gonna be able to control us in the future."

- @tressiemcphd.bsky.social with an absolute mic drop moment about AI bullshit.

Incredible words.
Listen to all of it!

19.12.2025 12:25 — 👍 8802    🔁 4077    💬 86    📌 443

The "evidence-based" policy model is being abandoned in favour of getting AI into schools for "testing in the wild", and that's giving big tech and edtech the freedom to pump private tech into public education even harder than before. The "evidence" will only come later, once they're embedded.

29.11.2025 01:14 — 👍 15    🔁 3    💬 1    📌 0

I don't understand how anyone can watch how blatantly Grok is manipulated to answer the way ownership desires it to and then act like the other LLM chatbots couldn't possibly be similarly but less obviously compromised to produce responses in whatever way corporate interests and priorities dictate.

23.11.2025 19:13 — 👍 5998    🔁 1739    💬 49    📌 111
Screenshot from linked article:

Learning is done best in community. The most important function of educational institutions is providing spaces for building that community. Every time we suggest a student would be well served by "asking a chatbot," we are cutting off an opportunity for that student to engage with their classmates, instructors, or librarians. Those interactions, however small, are what constitute communities of learners.

Yes! This also comes up in my recent contribution to CHE:

www.chronicle.com/article/how-...

23.11.2025 14:40 — 👍 113    🔁 38    💬 2    📌 3

Journalist challenge: Use "Machine Learning" when you mean machine learning and "LLM" when you mean LLM. Ditch "AI" as a catch-all term; it's not useful for readers, and it helps companies trying to confuse the public by obscuring the roles played by different technologies. 🧪

22.11.2025 16:50 — 👍 3711    🔁 1373    💬 58    📌 128
How to turn off Gemini in your Gmail, Docs, Photos, and more: it's easy to opt out. It's a little hidden, but there is a way to remove Gemini from your favorite Google services.

Turn off Gemini in Google!
It's been turned on by default.

Go to your Drive, click the gear icon, click settings, go to "manage apps" and uncheck that nasty "use by default" box.

www.zdnet.com/article/how-...

27.10.2025 19:10 — 👍 1002    🔁 701    💬 4    📌 14

"Help you read faster" is so dishonest lol, like yeah if you consider "not doing the reading" as "reading faster"

20.10.2025 12:52 — 👍 37    🔁 3    💬 1    📌 0
Without Our Consent: When I wrote last week's round-up of "AI"-related news, I didn't include any of OpenAI's product releases, mostly because it's 2025 and I'm exhausted by this game that tech companies and tech journali...

"Too many people are busily promoting a version of "'AI' literacy" that is simply training students how to use and consume 'AI' 'properly' – whatever that means – and refusing to admit that there may be no ethical usage of a fundamentally unethical, abusive technology," writes Audrey Watters.

08.10.2025 14:32 — 👍 24    🔁 10    💬 2    📌 2
Bargaining Committee | Penn Postdoc Union

In a time when workers in higher ed need to be as organized and coordinated as ever to fight threats and build the institutions we deserve, I'm proud to announce I'll be working alongside this great crew on the newly formed RAPUP-UAW bargaining committee 🫡 pennpostdocunion.org/bargaining-c...

03.10.2025 14:20 — 👍 1    🔁 0    💬 0    📌 0
Join: Joining the AAUP says that you're concerned about academic freedom, and about the way that basic freedom protects your teaching and research. Join today.

The @aaup.org beat Marco Rubio in court, with a Reagan-appointed judge ruling that ideological deportations obviously violate the first amendment.

These cases cost money. And we need millions of people moving together to make them stick. If you're faculty in the US, join AAUP and join the fight.

01.10.2025 15:11 — 👍 135    🔁 41    💬 1    📌 1

I have a theory - much like how if you live in New York City for long enough you become culturally Jewish in a way, if you hang out online for too long you end up culturally Philadelphian

15.09.2025 01:09 — 👍 1112    🔁 122    💬 14    📌 65
Alpha's privacy policy accounts for this sort of tracking and more, claiming far more access to student information than is typical for companies selling AI to schools, including MagicSchool. Alpha can, for example, use webcams to record students, including to observe their eye contact (partly to detect engagement and environmental distractions). And it can monitor keyboard and mouse activity (to see if students are idle) and take screenshots and video of what students are seeing on-screen (in part to catch cheating). In the future, the policy notes, the school could collect data from sleep trackers or headbands worn during meditation.

This description of Alpha School really would make Foucault melt.

www.bloomberg.com/news/feature...

08.09.2025 16:03 — 👍 12    🔁 5    💬 0    📌 0
But what is education? Embodied and enactive cognitive sciences remind us that knowledge is not something discrete that an individual possesses and passes down but an inherently dynamic and evolving endeavour that develops in the process of embodied, curious and engaged interaction through dialogue. The pinnacle of cognition, particularly 'human knowing', is inextricably interwoven with interactions we engage in

with each other and the physical, cultural and social world we inhabit, 'so much so that individuals are not thinkable outside of their interactions and embeddedness in their (social) world' (De Jaegher, 2019). Dialogic models of knowledge and education emphasize that interactions between a student and teacher and/or peer provide 'scaffolding' for how that child understands the world. The always in flux, active and continually transforming nature of human cognition necessitates that education be fundamentally an ongoing activity. Far from the reductionist view whereby 'formal knowledge' can be packaged and acquired from an LLM, the classroom is an environment where love, trust, empathy, care and humility are fostered and mutually cultivated through dialogical interactions

In this short piece, I lean on embodied cog sci to argue that we should refuse & resist LLMs in education (pp. 53-58) unesdoc.unesco.org/in/documentV...

"the classroom is an environment where love, trust, empathy, care & humility are fostered & mutually cultivated through dialogical interactions"

07.09.2025 10:06 — 👍 568    🔁 223    💬 18    📌 27
3.2 We do not have to 'embrace the future' & we can turn back the tide

It must be the sheer magnitude of [artificial neural networks'] incompetence that makes them so popular.
Jerry A. Fodor (2000, p. 47)

Related to the rejection of expertise is the rejection of imagining a better future and the rejection of self-determination free from industry forces (Hajer and Oomen 2025; Stengers 2018; van Rossum 2025). Not only AI enthusiasts, but even some scholars whose expertise concentrates on identifying and critically interrogating ideologies and sociotechnical relationships — such as historians and gender scholars — unfortunately fall prey to the teleological belief that AI is an unstoppable force. They embrace it because alternative responses seem too difficult, incompatible with industry developments, or non-existent. Instead of falling for this, we should "refuse [AI] adoption in schools and colleges, and reject the narrative of its inevitability." (Reynoldson et al. 2025, n.p.; also Benjamin 2016; Campolo and Crawford 2020; CDH Team and Ruddick 2025; Garcia et al. 2022; Kelly et al. 2025; Lysen and Wyatt 2024; Sano-Franchini et al. 2024; Stengers 2018). Such rejection is possible and has historical precedent, to name just a few successful examples: Amsterdammers kicked out cars, rejecting that cycling through the Dutch capital should be deadly. Organised workers died for the eight-hour workday, the weekend and other workers' rights, and governments banned chlorofluorocarbons from fridges to mitigate ozone depletion in the atmosphere. And we know that even the tide itself famously turns back. People can undo things; and we will (cf. Albanese 2025; Boztas 2025; Kohnstamm Instituut 2025; van Laarhoven and van Vugt 2025). Besides, there will be no future to embrace if we deskill our students and selves, and allow the technology industry's immense contributions to climate crisis

2. the strange but often repeated cultish mantra that we need to "embrace the future" โ€” this is so bizarre given, e.g. how destructive industry forces have proven to be in science, from petroleum to tobacco to pharmaceutical companies.

(Section 3.2 here doi.org/10.5281/zeno...)
4/n

06.09.2025 08:24 — 👍 463    🔁 104    💬 3    📌 27
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
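The caption's set-theoretic framing can be mimicked with ordinary Python sets. This is a toy sketch, not the paper's code: the GAN/Boltzmann-machine placement comes from the caption itself, while the other memberships below are illustrative assumptions.

```python
# Toy model of Figure 1's "cartoon set theoretic view": each category is a set
# of example systems inside the superset "AI". Only the GAN / Boltzmann machine
# placement is stated in the caption; the rest are illustrative assumptions.
categories = {
    "ANN":        {"GAN", "Boltzmann machine", "AlexNet"},
    "generative": {"GAN", "Boltzmann machine"},
    "LLM":        {"ChatGPT"},           # assumed placement
    "chatbot":    {"ChatGPT", "ELIZA"},  # assumed placement
}

# The caption's "purple subset": systems that are both generative and ANNs.
purple = categories["generative"] & categories["ANN"]
print(sorted(purple))  # ['Boltzmann machine', 'GAN']
```

Even in the toy, the point the caption (and Table 1) makes survives: the categories overlap rather than partition "AI", so no single term cleanly picks out the products one might want to critique.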

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3786    🔁 1897    💬 110    📌 390

Something I didn't get to say yesterday:

We heard over and over during the event about "human-centered" approaches to "AI". But if refusal is not on the table (at every level: individual students and teachers right up through UNESCO) then we have in fact centered the technology, not the people.

03.09.2025 10:35 — 👍 681    🔁 238    💬 6    📌 15

Since tomorrow is Labor Day and school is in session, I'd like to argue (again) that teaching about GenAI should include discussion of the extractive, exploitative labor conditions that make the technology possible. 🧵

01.09.2025 01:21 — 👍 82    🔁 24    💬 4    📌 1

This is an inevitable byproduct of automation. The AI boosters have been selling us that deskilling in some areas will result in upskilling in others, but we have no evidence of this. We don't even have a particularly good theory. We have salesmanship B.S.

28.08.2025 20:10 — 👍 220    🔁 70    💬 6    📌 5

we live in a world defined by replacing human labor (employees) with machines which then need human babysitting (independent contractors on slave wages)*

27.08.2025 21:33 — 👍 245    🔁 74    💬 10    📌 1

What frightens me isn't just that people have stopped reading. It's that they've replaced reading with mimicry. A quote here, a post there, stitched together to sound like wisdom. And sadly, in this world where books gather dust and posts go viral, it's very easy to confuse loudness for wisdom.

23.08.2025 07:33 — 👍 247    🔁 64    💬 11    📌 3

Firing and demoralizing feminized jobs as enemies of the state while brazenly bribing men with violent jobs that almost instantly put them into the middle of the middle class is very basic gendered warfare. Fulfilling the manosphere's promise.

15.08.2025 20:49 — 👍 7542    🔁 2410    💬 16    📌 57

"What's propping up 'AI' is not 'the people.' It's the police. And it's the petroleum industry.

As such, when I hear educators insist that 'AI' is the future that we need to be preparing students for, I wonder why they're so willing to build a world of prisons and climate collapse."

15.08.2025 11:46 — 👍 40    🔁 18    💬 1    📌 0

Kind of feel like it should be bigger national news that the poorest major city in the country is being forced to accept devastating service cuts to our public transit system because PA Republicans actively hate Philly and want to cause us pain

14.08.2025 21:32 — 👍 1464    🔁 299    💬 14    📌 11

EFF has been saying this for years: School monitoring software sacrifices student privacy for unproven promises of safety.

13.08.2025 20:49 — 👍 107    🔁 39    💬 2    📌 2

Both helpful tidbits to consider, ty! 2/3 reviews are some of the most positive I've ever gotten; one explicitly asked for only minor revisions. The negative review asks for very addressable nuance in a few spots. I'll likely keep my pride intact & move on, but am tempted to politely nudge the editor

11.08.2025 21:12 — 👍 0    🔁 0    💬 0    📌 0

In all likelihood I will simply take the L and move on from this journal and to a new one, but my pride wants the editors to more carefully consider the extremely affirming feedback two of the reviewers provided that seems overlooked in the summary 🥲

11.08.2025 20:39 — 👍 1    🔁 0    💬 0    📌 0

A very 2025 quandary: I received a rejection from a journal & the editor's summary of the reviews contains various inaccuracies, misnaming theoretical constructs and misattributing feedback to the wrong reviewers. My worry is that I got decisioned by AI. Is it petty to write a note to rebut the summary?

11.08.2025 20:36 — 👍 1    🔁 0    💬 1    📌 0

The answer to "how to use AI responsibly" is "don't."

05.08.2025 18:25 — 👍 773    🔁 233    💬 6    📌 4