
Brianna Hyslop

@behyslop.bsky.social

Sort-of Academic, plant obsessed, fiber arts enthusiast. Now back in NE Wisconsin.

143 Followers  |  343 Following  |  65 Posts  |  Joined: 21.06.2023

Latest posts by behyslop.bsky.social on Bluesky

right. given the numbers, “how to win back the working class” should be as much about care and service workers as hard hats. and yet.

25.10.2025 12:02 — 👍 4944    🔁 460    💬 34    📌 56

As you cancel streaming services, here is a casual reminder that only 16% of Americans read for pleasure anymore, and your local library has hundreds or thousands of books you haven't read.

They would love to see you stop by and renew your library card.

18.09.2025 19:55 — 👍 13647    🔁 5526    💬 253    📌 486
Why Language Models Hallucinate, by Kalai et al. 

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such “hallucinations” persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious—they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded—language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This “epidemic” of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.

Ironically, it appears that AI chatbots hallucinate for the same reason that students feel compelled to use them:

They were socialized in a high-stakes testing culture that rewards guessing and maybe getting it right over admitting when there's something you just don't know.

08.09.2025 10:41 — 👍 1463    🔁 416    💬 43    📌 58
Survey results from student voices flash survey on AI answered by 1047 students. 55% brainstorming ideas; 50% asking it questions like a tutor; 46% studying for exams or quizzes; 44% editing writing or checking work; 31% outlining papers; 26% generating citations; 42% using it like an advanced search engine; 25% completing assignments or coding work; 3*% generating summaries; 19% writing free responses or essays; 15% have not used it for coursework

Useful to see these responses. But context we need: 1) are they getting any guidance in "asking it questions like a tutor" (tutors do things other than answer questions) 2) what does brainstorming look like? In some contexts, brainstorming is critical thinking/1 www.insidehighered.com/news/student...

30.08.2025 12:02 — 👍 17    🔁 4    💬 1    📌 1

you don't need chatGPT i am perfectly capable of drinking a bottle of water and lying to you

01.05.2025 14:37 — 👍 15330    🔁 5209    💬 61    📌 66

There was this phone number you could call and the entire purpose was to tell you the time and the temperature.

And if a friend’s house had Call Waiting, you could pick a time they’d call time & temp and then you’d call them so you could talk at night without the ringer waking up their parents.

14.07.2025 03:59 — 👍 4    🔁 0    💬 0    📌 0

There's a lot of calling Crémieux an academic and Chris Rufo a journalist going on in this news story accusing a man of misrepresenting his identity

07.07.2025 03:24 — 👍 5448    🔁 1079    💬 21    📌 24

“AI tools…may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it.”

16.06.2025 12:22 — 👍 2    🔁 0    💬 0    📌 0

Hey look, I totally get if you say “I don’t trust people who use AI in any form, and that’s why I’m not putting my books on Kobo” but if you are putting your books on Amazon as you say that, I don’t believe that the reason you’re not putting your books on Kobo is that you don’t like AI.

03.06.2025 14:08 — 👍 395    🔁 32    💬 7    📌 3
The Reading Struggle Meets AI: The crisis has worsened, many professors say. Is it time to think differently?

In one campus study on reading, students had two main complaints. By far their biggest gripe is that the assigned reading rarely gets talked about in class. The second reason — perhaps a more complicated one — is that they don’t understand what they are supposed to be reading for. chroni.cl/4mQCvs3

27.05.2025 11:28 — 👍 49    🔁 17    💬 1    📌 7

hate it when content not created or approved by the newsroom happens to get printed in the end product

20.05.2025 14:27 — 👍 62    🔁 15    💬 3    📌 0

This was just posted by @tbretc.bsky.social on another platform. The Chicago Sun-Times obviously gets ChatGPT to write a ‘summer reads’ feature almost entirely made up of real authors but completely fake books. What are we coming to?

20.05.2025 11:04 — 👍 12982    🔁 3828    💬 771    📌 1885
Everyone Is Cheating Their Way Through College: ChatGPT has unraveled the entire academic project.

“Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate…Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.”

07.05.2025 11:08 — 👍 3827    🔁 1406    💬 173    📌 915
A man with blonde hair wearing a knight’s armor smiles and winks for the camera. This is a gif of actor Heath Ledger playing Sir William from A Knight’s Tale (2001).

Plus:

26.04.2025 03:38 — 👍 0    🔁 0    💬 0    📌 0

Movie you’ve watched more than six times using gifs.

(“Hard mode”: no Star Wars, Star Trek, or LOTR)

26.04.2025 03:11 — 👍 1    🔁 0    💬 1    📌 0

Pineapple lifesavers lived up to their name in my experience!

23.04.2025 21:18 — 👍 1    🔁 0    💬 0    📌 0

you will pry my em dashes—my favorite punctuational tools—from my cold dead hands

22.04.2025 17:08 — 👍 102    🔁 18    💬 3    📌 0

For me, as for many people, the recent glut of Studio-Ghibli-styled AI images has left a bad taste in my mouth.

My grandma (we called her “granarch” because she was 𝘢𝘯𝘢𝘳𝘤𝘩𝘪𝘤) wrote “Howl's Moving Castle”, which was adapted by Hayao Miyazaki into a film of the same name.

02.04.2025 08:50 — 👍 413    🔁 128    💬 9    📌 5

poor Nintendo, announcing the Switch 2 just in time to have it cost ten thousand dollars

02.04.2025 20:44 — 👍 6717    🔁 708    💬 61    📌 36

I'm struck once again by the similarity in failure rate between generative AI and "plagiarism detection software," which misses replicated source material 40-60% of the time. I don't think it's a coincidence. Most likely, it shows the threshold where tech CEOs feel they can bilk the credulous.

19.03.2025 11:33 — 👍 6    🔁 3    💬 1    📌 0
Gentlemanly capitalism - Wikipedia

As the lone lit grad student in some of Tony Hopkins's history seminars, I didn’t agree with everything he had to say, but he had a point: en.m.wikipedia.org/wiki/Gentlem...

28.02.2025 13:36 — 👍 4    🔁 0    💬 1    📌 0

If you’ve been as obsessed with the cars of the show as I’ve been:

21.02.2025 16:08 — 👍 2    🔁 0    💬 0    📌 0

This is the stuff.

21.02.2025 00:21 — 👍 58    🔁 9    💬 0    📌 0

Passing on a note from a colleague: People with federal student loans should download their files IMMEDIATELY. These files are currently on the studentaid.gov website, which may get deleted if Trump follows through with his EO to shut down DOE

04.02.2025 19:06 — 👍 1938    🔁 1346    💬 91    📌 62
list of banned keywords

🚨 BREAKING. From a program officer at the National Science Foundation, a list of keywords that can cause a grant to be pulled. I will be sharing screenshots of these keywords along with a decision tree. Please share widely. This is a crisis for academic freedom & science.

04.02.2025 01:26 — 👍 27920    🔁 15813    💬 1279    📌 3688

film stills #36-39

The goats of Severance.

#severance

02.02.2025 00:50 — 👍 4478    🔁 239    💬 148    📌 18
Scene from The Princess Bride where Vizzini says "You're trying to kidnap what I've rightfully stolen"

OpenAI right now

29.01.2025 20:26 — 👍 44070    🔁 6350    💬 290    📌 176

People worry about students treating school purely as a transaction, then gleefully shove them toward an AI tutor and wonder what's wrong. Madness.

16.01.2025 02:42 — 👍 24    🔁 5    💬 0    📌 0

this is so good. this is so so so good. i'm running around my office screaming the lev manovich quote about how software is ideology

15.01.2025 15:35 — 👍 4729    🔁 1247    💬 48    📌 17

At least 11 librarians in LA lost their homes in the LA Fires. A thread of their gofundmes:

12.01.2025 21:37 — 👍 1457    🔁 1219    💬 12    📌 19
