Very worth the read for precision about LLMs/AI and discussion of metaphor.
More Work for Mother
It's the quiet, almost plaintive fear here that gets me. It's got a "Twilight Zone" feel.
“We told people before, and after, to be careful, that the AI is going to be (or was) biased, and nothing helped,” Naaman said.
Yes. Perhaps the most important part: the students are quickly becoming disillusioned.
My experience: students sense that they need to learn without AI, but they don't understand how, or why, to navigate the current university and career landscape without it.
I finally lost my shit about so-called AIs, LLMs, enshittification & everything fucking evil that @officialgrammarly.bsky.social is doing. I'm fucking furious, and not just about what 1 company has done. An open letter to Grammarly & the rest of the LLM hype machine www.moryan.com/an-open-lett...
You cannot really run a university like a business because our most valuable product is failure.
@alokvmenon.bsky.social is so good on this topic.
In my classes I am pushing against grammarly and many students are very confused. They are *sure* I value technical correctness over evidence of thinking.
Kids are being taught by AI tools that the hallmarks of AI writing (uniformity and technical correctness) are paramount, and then this is reinforced by standardized tests that reward these hallmarks.
If you lead with anger, if your politics is built out of grievance, then you’re always looking for a target for your anger. But the path to liberation is one built out of love, out of wanting to no longer feel hatred in your heart.
I don’t want revenge. I want everyone to be free.
Reading @sarahkendzior.bsky.social convinces me that it can be both... There is a 12-dimensional plan, and part of the plan is putting stupid evil people in power and capitalizing on their worst impulses.
My new Strange Horizons essay is up! With a deliberately provocative title!
WHY ALL SCIENCE FICTION AND FANTASY WRITERS ARE HISTORIANS
Yes, *all*
And, no, it isn't just for the reason you think...
I've been working toward this one for a while, very excited to share it!
Describing myself to someone in tech/science who might know of a job opening:
"Just make sure they know I'm a hard humanist, okay?"
I need to think more about how to bring this into the classroom.
How can I create a better framework for caring about source when there is no institutional method for this?
Seeing now how the threshold issue - making it a problem of proving source to pre-AI University plagiarism standards - has enabled some of this shift in discourse from source to content.
But this burden of proof was for a different world and a different set of presumed sources.
I'm also asking for writing that focuses on personal and sensory experience - but we're still talking about content and not how it helps me filter for source - because it takes so much more effort to get the LLM to produce even minimally convincing writing about lived experience.
My "common sense" pivot has been to focus the course policy on content - I can't prove whether or not you're using AI, so you have to turn in writing that demonstrates judgment and expertise.
(Relevant that this is online asynchronous teaching - I can't just use blue books in the classroom).
But when AI detection tools failed to meet this standard, and admin started pushing AI integration, we had to start pivoting immediately - often during the semester, rewriting assignments on the fly (I'm currently on a 20 minute break from this task).
Seems to me that the early focus on genAI plagiarism in education has had compounding consequences for our ability to talk about epistemic vigilance.
At first, all the focus was on source - and specifically on the burden on instructors to prove source to Academic Integrity Policy standards.
This is a shift from previous policy that focused on detecting genAI plagiarism.
So, I have shifted from source-based epistemic vigilance to content-based.
This thread is helping me think about how this could be a problem.
Great post about LLMs and "epistemic vigilance." Helping me think through my class AI policy, which currently focuses on the idea that because writers are responsible for their own output, they must develop sufficient expertise and judgment to evaluate it, regardless of genAI use/input.
I keep reminding people that admins hate English departments because they are popular (read: inefficient), not because they aren’t. AI is in a long line of technologies that promise to solve that problem for them.
aka, can I still call myself YIMBY if we're talking about concentration camps?
*on an island, a BAFTA exec, a BBC exec, and a couple American politicians*
"I wonder what would happen if the disability/neurodivergence people and the BLM people ever REALLY started organizing together?"
"Oooh, yeah, we don't want that. How could we set that back by about 10 years?"
"If it's not positronic the problems are chronic"
I remember playing dress-up as a kid in the '80s and making my little sister be the man because obvs the ladies got all the flashiest softest flowiest things. Very cognitive dissonance to realize that liking the best clothes supposedly meant I couldn't like science, books, and dirt.
Teaching Omelas and marveling at Le Guin this week.
"But what can anyone do - the narrator says helping the abused child is impossible because it would bring the whole system down."
"Okay, and what do we think of such a system - and such a narrator?"
The upshot is that all of this was very obvious even when AI looked like Robbie the Robot.
The loss of technological/speculative literacy and curiosity is staggering.
In my class we're working on this framing by discussing midcentury robot stories. The apocalypse stories show robots carrying out flawed human orders with the help of flawed humans. The utopia stories all have very obvious dei ex machina that differ radically from current genAI/LLM models.