
Chris Manion (he/him)

@jazzmanion12.bsky.social

Writing Across the Curriculum Coordinator, Center for the Study and Teaching of Writing at The (I know I know) Ohio State University. Jazz dad.

46 Followers  |  93 Following  |  38 Posts  |  Joined: 19.09.2023

Latest posts by jazzmanion12.bsky.social on Bluesky

My favorite line a professor here gave me in graduate school was that "Columbus is a nice place to live, but I wouldn't want to visit there." It's probably less true now, but...

03.11.2025 23:37 — 👍 2    🔁 0    💬 1    📌 0
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC

Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory. www.bbc.co.uk/mediacentre/...

23.10.2025 17:17 — 👍 675    🔁 386    💬 21    📌 72

Okay so I have a true little story about how the Episcopal Church funded the local BPP chapter’s ambulance program and all I’m saying is, Episcopalians: they may surprise you!

15.10.2025 17:18 — 👍 335    🔁 32    💬 3    📌 5
Screen cap of parodic version of William Blake's "The Tyger" that begins:
Tyger! Tyger! Burning bright
(Not sure if I spelled that right) 
What immortal hand or eye
Could fashion such a stripy guy? 
What the hammer that hath hewn it 
Into such a chonky unit?
Did who made the lamb make thee, 
Or an external franchisee?


In honor of National Poetry Day, the greatest parody rewrite of all time:

02.10.2025 15:16 — 👍 3731    🔁 1439    💬 39    📌 61
The Intuition Behind How Large Language Models Work, Part I Large Language Models (LLMs) are fancy artificial neural networks. But you don’t have time to learn the math or engineering. Unfortunately…

If you use any of my How LLMs Work blog posts in your classes, could you please let me know? I'm applying for a thing 🙏.

The Intuition Behind How Large Language Models Work
medium.com/@mark-riedl/...

A Very Gentle Introduction to Large Language Models without the Hype
medium.com/@mark-riedl/...

30.09.2025 01:30 — 👍 50    🔁 17    💬 4    📌 0
Education report calling for ethical AI use contains over 15 fake sources Experts find fake sources in Canadian government report that took 18 months to complete.

Lmaooooooo

23.09.2025 20:42 — 👍 760    🔁 283    💬 20    📌 34
Frameworks and Activities for Fostering AI Literacy — UVA Teaching Hub This collection features frameworks for understanding AI literacy—including one framework developed here at UVA—as well as classroom activities that support the development of AI literacy in both s...

If you've been trying to get a read on "AI literacy," the UVA Teaching Hub has a great new collection for you. "Frameworks and Activities for Fostering AI Literacy" has been curated by librarian Bethany Mickel and instructional designer Fang Li. teaching.virginia.edu/collections/...

23.09.2025 13:32 — 👍 3    🔁 1    💬 0    📌 0
YouTube video by Anil Dash: Taylor's Version vs the AI Robot Invasion

Okay, this one’s a little bit different — a quick video from me, wondering if “Taylor’s Version” might give us a glimpse of a way forward in the war against AI robots scraping up all the content of the internet without consent or compensation. youtu.be/X7jbRY3MvpQ

23.09.2025 17:04 — 👍 7    🔁 2    💬 1    📌 0
Preview
The Luddite Renaissance is in full swing This fall, the new luddites are rising

in education, creative industries, caring professions, and LABOR ORGANIZING, people are doing things, there's a movement

against-a-i.com

www.bloodinthemachine.com/p/the-luddit...

21.09.2025 14:56 — 👍 289    🔁 95    💬 4    📌 8
Science journalists find ChatGPT is bad at summarizing scientific papers LLM “tended to sacrifice accuracy for simplicity” when writing news briefs.

Reading papers is a basic skill (and dare I say duty?) of scientists, and I include medical doctors in that group. Keeping abreast of the literature is a foundational part of our professions. There aren’t good shortcuts. In any case, reading papers regularly is fun.
arstechnica.com/ai/2025/09/s...

21.09.2025 06:22 — 👍 280    🔁 127    💬 10    📌 16

“OpenAI’s own advanced reasoning models actually hallucinated more frequently than simpler systems. The company’s o1 reasoning model ‘hallucinated 16% of the time’ when summarizing public information, while newer models o3 and o4-mini ‘hallucinated 33% and 48% of the time, respectively.’”

20.09.2025 12:09 — 👍 10    🔁 4    💬 1    📌 0

I'm on the fence about some of this, but it takes the current problems seriously. Even the early, idealistic ideology of OA was premised on the idea that access meant, above all, 1. digital and 2. access to stuff.

11.09.2025 10:37 — 👍 32    🔁 7    💬 1    📌 0
Ohio State University prohibits land acknowledgements under most circumstances A land acknowledgement is a verbal or written statement that recognizes that land was taken from Indigenous people. Ohio State's policy says they are considered statements "on behalf of an issue or ca...

www.wosu.org/politics-gov...

09.09.2025 19:47 — 👍 3    🔁 6    💬 0    📌 4

like so many products to come out of silicon valley in the last decade or so, AI is a legal innovation masquerading as a tech innovation. the legal theory seems to be that AI is entitled to everything but liable for nothing.

27.08.2025 18:10 — 👍 1769    🔁 504    💬 24    📌 13

This is a very good post to send to your friends and colleagues who use AI, teaching them how to get more correct answers out of LLMs. AI haters should read this too, as it’s research-based and shows how these systems work when you encounter them.

07.09.2025 14:33 — 👍 319    🔁 67    💬 51    📌 11
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.


Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

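The overlapping categories in the Figure 1 caption can be mimicked with plain Python sets. This is only an illustrative sketch: the specific membership assignments below are my reading of the caption's examples, not claims taken from the paper itself.

```python
# Illustrative sets for the AI-terminology taxonomy sketched in Figure 1.
# Membership here is an assumption based on the caption's examples.
ai = {"ChatGPT", "Siri", "BERT", "AlexNet", "ELIZA", "A.L.I.C.E.",
      "Jabberwacky", "GAN", "BM", "LDA", "QDA"}

llms = {"ChatGPT", "BERT"}                            # large language models
anns = {"ChatGPT", "BERT", "AlexNet", "GAN", "BM"}    # artificial neural networks
generative = {"ChatGPT", "GAN", "BM"}                 # generative models
chatbots = {"ChatGPT", "Siri", "ELIZA", "A.L.I.C.E.", "Jabberwacky"}

# The figure's "purple subset": models that are both generative AND ANNs,
# which is where the caption places GAN and BM.
purple = generative & anns
print(sorted(purple))  # ['BM', 'ChatGPT', 'GAN']

# Every category is a subset of the AI superset, as the figure depicts.
assert llms <= ai and anns <= ai and generative <= ai and chatbots <= ai
```

The point the caption makes (and Table 1 reiterates) drops out directly: none of these sets are disjoint, so no single term cleanly picks out the products one might want to critique.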

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3292    🔁 1678    💬 102    📌 294
Apologies: You Have Reached the End of Your Free-Trial Period of America! Want rule of law? That’s premium.

Apologies: You Have Reached the End of Your Free-Trial Period of America! “We are retaining some features for premium users. Want rule of law? That’s premium. The right to run your company without government interference? That’s a paid feature now.” [theatlantic.com]

03.09.2025 13:08 — 👍 234    🔁 68    💬 7    📌 3

When you're working on a solo project and you lose so much momentum that everything grinds to a screeching halt, what you are suffering from is a lack-of-feedback loop. It's time to show what you have to someone who is not you.

Is what I'm doing worth it? Is it good? Entertaining? GO FIND OUT!

01.09.2025 15:59 — 👍 135    🔁 23    💬 3    📌 6

If people want some of the things AI can do, have those technologies be provided by tools & platforms created by cooperatives, by unions, by universities and governments, by municipalities and by individual creators, or as open source owned by nobody. Destroy the economic value of proprietary tools.

26.08.2025 20:32 — 👍 551    🔁 97    💬 14    📌 19

It enrages me how useful grant proposals are lol

Also, related, one of the ways that cancelling future grant opportunities harms science is that it scales back a process that often helps scientists really think through our ideas

When we do less of it, the science can suffer

27.08.2025 01:21 — 👍 220    🔁 24    💬 5    📌 2
Screenshot of title page of article published in the Journal of Applied Research in Memory and Cognition titled "Expert Thinking With Generative Chatbots."


Great article with one of the best brief layperson introductions to AI v. LLMs that I’ve seen. And I love the exploration of how whether and how LLMs are useful depends on the user’s level of expertise. #PsychSciSky #AcademicSky #EduSky
doi.org/10.1037/mac0...

25.08.2025 12:02 — 👍 30    🔁 7    💬 1    📌 2

CW: Child Abuse

James Dobson was a monster. I grew up listening to Focus on the Family. My parents had every single one of his abusive, authoritarian parenting books. He wrote the manuals they used to destroy my childhood, and ruined any chance we had at a healthy relationship.

I used to sneak

21.08.2025 15:57 — 👍 715    🔁 172    💬 38    📌 25
Blood in the Machine: The Origins of the Rebellion against Big Tech | Critical AI | Duke University Press

🚨 On “resistance to ‘technologies of disruption,’ past and present”: the amazing Carolyn Lesjak on Luddite uprisings of the early 19th century and their necessary afterlives today 🚨

14.08.2025 19:53 — 👍 5    🔁 2    💬 1    📌 0

This is *WILDLY* illegal, and of course points to egregious systemic issues of all kinds.

14.08.2025 15:20 — 👍 2    🔁 0    💬 0    📌 0

Which number of rain type were these? I imagine Chicago has its own century or so.

13.08.2025 13:04 — 👍 1    🔁 0    💬 1    📌 0
AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

“The AI in the study probably prompted doctors to become over-reliant on its recommendations, ‘leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,’ the scientists said in the paper.”

12.08.2025 23:41 — 👍 5258    🔁 2581    💬 115    📌 539

Thanks for reading. I get this a lot. My whole career, actually. “You don’t tell us what to do!” For one thing, I have a well-articulated position on advice. I try not to give it! But also, I’m not an advice columnist. The biggest reason I don’t give any advice here is IT DEPENDS.

13.08.2025 01:57 — 👍 117    🔁 5    💬 3    📌 1

This from @tressiemcphd.bsky.social hit me over the head like a mallet of truth. This is the thing. This is what I’ve been trying to warn people about perfectly crystallized. www.nytimes.com/2025/08/12/o...

12.08.2025 12:02 — 👍 1222    🔁 420    💬 15    📌 26
Throughlines — Race in the premodern classroom Created by field-leading scholars, Throughlines’ pedagogical approaches offer accessible and critical ways to incorporate discussions of race in the premodern studies classroom.

I really needed this today.

Ayanna Thompson sent me a link to this jaw-dropping thing they built with a Mellon grant at @acmrs.bsky.social

You can get lost in it.

A spectacular reminder that digital resources don't have to be about surveillance, coercion, & disciplining the labor force.

05.08.2025 22:52 — 👍 145    🔁 66    💬 0    📌 6

Any narratyve of technological inevitabilitye ys false. Narratyves of technological inevitabilitye saye much about those tellinge them, litel about the future itself.

05.08.2025 19:29 — 👍 84    🔁 27    💬 1    📌 0
