My favorite line a professor here gave me in graduate school was that "Columbus is a nice place to live, but I wouldn't want to visit there." It's probably less true now, but...
03.11.2025 23:37
@jazzmanion12.bsky.social
Writing Across the Curriculum Coordinator, Center for the Study and Teaching of Writing at The (I know I know) Ohio State University. Jazz dad.
Largest study of its kind shows AI assistants misrepresent news content 45% of the time, regardless of language or territory. www.bbc.co.uk/mediacentre/...
23.10.2025 17:17
Okay so I have a true little story about how the Episcopal Church funded the local BPP chapter's ambulance program, and all I'm saying is, Episcopalians: they may surprise you!
15.10.2025 17:18 β π 335 π 32 π¬ 3 π 5Screen cap of parodic version of William Blake's "The Tyger" that begins: Tyger! Tyger! Burning bright (Not sure if I spelled that right) What immortal hand or eye Could fashion such a stripy guy? What the hammer that hath hewn it Into such a chonky unit? Did who made the lamb make thee, Or an external franchisee?
In honor of National Poetry Day, the greatest parody rewrite of all time:
02.10.2025 15:16
If you use any of my How LLMs Work blog posts in your classes, could you please let me know? I'm applying for a thing.
The Intuition Behind How Large Language Models Work
medium.com/@mark-riedl/...
A Very Gentle Introduction to Large Language Models without the Hype
medium.com/@mark-riedl/...
If you've been trying to get a read on "AI literacy," the UVA Teaching Hub has a great new collection for you. "Frameworks and Activities for Fostering AI Literacy" has been curated by librarian Bethany Mickel and instructional designer Fang Li. teaching.virginia.edu/collections/...
23.09.2025 13:32
Okay, this one's a little bit different: a quick video from me, wondering if "Taylor's Version" might give us a glimpse of a way forward in the war against AI robots scraping up all the content of the internet without consent or compensation. youtu.be/X7jbRY3MvpQ
23.09.2025 17:04
In education, creative industries, caring professions, and LABOR ORGANIZING, people are doing things; there's a movement.
against-a-i.com
www.bloodinthemachine.com/p/the-luddit...
Reading papers is a basic skill (and dare I say duty?) of scientists, and I include medical doctors in that group. Keeping abreast of the literature is a foundational part of our professions. There aren't good shortcuts. In any case, reading papers regularly is fun.
arstechnica.com/ai/2025/09/s...
"OpenAI's own advanced reasoning models actually hallucinated more frequently than simpler systems. The company's o1 reasoning model 'hallucinated 16% of the time' when summarizing public information, while newer models o3 and o4-mini 'hallucinated 33% and 48% of the time, respectively.'"
20.09.2025 12:09
I'm on the fence about some of this, but it takes the current problems seriously. Even the early, idealistic ideology of OA was premised on the idea that access meant, above all, 1. digital and 2. access to stuff.
11.09.2025 10:37
Like so many products to come out of Silicon Valley in the last decade or so, AI is a legal innovation masquerading as a tech innovation. The legal theory seems to be that AI is entitled to everything but liable for nothing.
27.08.2025 18:10
This is a very good post to send to friends and colleagues who use AI: it teaches them how to get more correct answers out of LLMs. AI haters should read it too, as it's research-based and shows how these systems work when you encounter them.
07.09.2025 14:33
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users: in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Apologies: You Have Reached the End of Your Free-Trial Period of America! "We are retaining some features for premium users. Want rule of law? That's premium. The right to run your company without government interference? That's a paid feature now." [theatlantic.com]
03.09.2025 13:08
When you're working on a solo project and you lose so much momentum that everything grinds to a screeching halt, what you are suffering from is a lack-of-feedback loop. It's time to show what you have to someone who is not you.
Is what I'm doing worth it? Is it good? Entertaining? GO FIND OUT!
If people want some of the things AI can do, have those technologies be provided by tools & platforms created by cooperatives, by unions, by universities and governments, by municipalities and by individual creators, or as open source owned by nobody. Destroy the economic value of proprietary tools.
26.08.2025 20:32
It enrages me how useful grant proposals are lol
Also, related, one of the ways that cancelling future grant opportunities harms science is that it scales back a process that often helps scientists really think through our ideas
When we do less of it, the science can suffer
Screenshot of title page of article published in the Journal of Applied Research in Memory and Cognition titled "Expert Thinking With Generative Chatbots."
Great article with one of the best brief layperson introductions to AI vs. LLMs that I've seen. And I love the exploration of whether and how LLMs' usefulness depends on the user's level of expertise. #PsychSciSky #AcademicSky #EduSky
doi.org/10.1037/mac0...
CW: Child Abuse
James Dobson was a monster. I grew up listening to Focus on the Family. My parents had every single one of his abusive, authoritarian parenting books. He wrote the manuals they used to destroy my childhood, and ruined any chance we had at a healthy relationship.
I used to sneak
On "resistance to 'technologies of disruption,' past and present": the amazing Carolyn Lesjak on Luddite uprisings of the early 19th century and their necessary afterlives today
14.08.2025 19:53
This is *WILDLY* illegal, and of course points to egregious systemic issues of all kinds.
14.08.2025 15:20
Which number of rain type were these? I imagine Chicago has its own century or so.
13.08.2025 13:04
"The AI in the study probably prompted doctors to become over-reliant on its recommendations, 'leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,' the scientists said in the paper."
12.08.2025 23:41
Thanks for reading. I get this a lot. My whole career, actually. "You don't tell us what to do!" For one thing, I have a well-articulated position on advice. I try not to give it! But also, I'm not an advice columnist. The biggest reason I don't give any advice here is IT DEPENDS.
13.08.2025 01:57
This from @tressiemcphd.bsky.social hit me over the head like a mallet of truth. This is the thing. This is what I've been trying to warn people about, perfectly crystallized. www.nytimes.com/2025/08/12/o...
12.08.2025 12:02
I really needed this today.
Ayanna Thompson sent me a link to this jaw-dropping thing they built with a Mellon grant at @acmrs.bsky.social
You can get lost in it.
A spectacular reminder that digital resources don't have to be about surveillance, coercion, & disciplining the labor force.
Any narratyve of technological inevitabilitye ys false. Narratyves of technological inevitabilitye saye much about those tellinge them, litel about the future itself.
05.08.2025 19:29