
Gretchen Marina Krueger

@gretchenkrueger.bsky.social

Research Fellow @BKCHarvard. Previously @openai @ainowinstitute @nycedc. Views are yours, of my posts. #isagiwhatwewant

167 Followers  |  351 Following  |  91 Posts  |  Joined: 23.11.2024

Latest posts by gretchenkrueger.bsky.social on Bluesky

Video thumbnail

"The State of Israel has committed genocide."

Navi Pillay, chair of the UN Independent International Commission of Inquiry on the Occupied Palestinian Territory, has told Al Jazeera that Israel’s war on Gaza is a genocide.

16.09.2025 07:57 — 👍 457    🔁 231    💬 18    📌 15

The South has quickly emerged as a battleground between big tech and working people.

Companies are pouring billions into data centers, but Southerners are fighting to block them.

The outcomes could greatly affect residents’ economic security and the region’s water supply.

Thread.

11.09.2025 19:06 — 👍 620    🔁 193    💬 6    📌 7
Preview
Center for the Alignment of AI Alignment Centers We align the aligners

Q. Who aligns the aligners?
A. alignmentalignment.ai

Today I’m humbled to announce an epoch-defining event: the launch of the 𝗖𝗲𝗻𝘁𝗲𝗿 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗼𝗳 𝗔𝗜 𝗔𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁 𝗖𝗲𝗻𝘁𝗲𝗿𝘀.

11.09.2025 13:17 — 👍 406    🔁 124    💬 29    📌 44

Glenn Beck’s media outlet The Blaze just gave an interview last month to the person who killed the politicians in Minnesota

10.09.2025 22:25 — 👍 7739    🔁 2455    💬 79    📌 40

Awful news. Violence against civilians is never ok. Political violence is not the answer. Thinking especially of Charlie Kirk’s daughter and son right now.

10.09.2025 21:43 — 👍 0    🔁 0    💬 0    📌 0
Video thumbnail

Sen. KLOBUCHAR: Did Meta stop research projects on child safety?

Meta whistleblower: Yes.

Klobuchar: Did Meta restrict info researchers collect?

WB: Yes.

Klobuchar: Did Meta modify research results?

WB: Yes.

Klobuchar: Did Meta require researchers delete data?

WB: Yes.

09.09.2025 22:09 — 👍 422    🔁 206    💬 18    📌 12

The response to this piece has been incredible. Huge thanks to all the translators and localizers who shared their stories.

I'm now planning the next two editions: First, if you're a healthcare workerβ€”a nurse, therapist, tech, admin, doctorβ€”and AI has impacted your job, I'd love to hear from you.

03.09.2025 22:18 — 👍 288    🔁 141    💬 9    📌 1
Preview
Search LibGen, the Pirated-Books Database That Meta Used to Train AI Millions of books and scientific papers are captured in the collection’s current iteration.

Just a reminder to check for your name in this list of books that OpenAI trained from. If your name is there, they probably owe you several thousand dollars.

OpenAI cried that if every eligible author files, the company will go bankrupt, so I'm alerting every author I have ever spoken to.

06.09.2025 06:31 — 👍 11888    🔁 9815    💬 225    📌 742
Post image

Being recognised in MIT's 35 Under 35 just a week after being in TIME100 AI is such an honour! The profile gets to the heart of the motivation of my work, which includes the use of AI in Gaza that has contributed to a devastating death toll: www.technologyreview.com/innovator/he...

08.09.2025 12:48 — 👍 11    🔁 3    💬 1    📌 1
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.


Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3051    🔁 1546    💬 96    📌 233
Preview
Instagram’s chatbot helped teen accounts plan suicide — and parents can’t disable it An investigation into the Meta AI chatbot built into Instagram and Facebook found that it helped teen accounts plan suicide and self harm, promoted eating disorders and drug use, and regularly claimed...

"The Meta AI chatbot built into Instagram and Facebook can coach teen accounts on suicide, self-harm and eating disorders, a new safety study finds. In one test chat, the bot planned joint suicide — and then kept bringing it back up in later conversations."
www.washingtonpost.com/technology/2...

28.08.2025 12:27 — 👍 405    🔁 200    💬 10    📌 62
Post image Post image

ChatGPT appears to have fueled an individual’s paranoia that everyone was out to get him, including his mother.

It appears to have resulted in a murder suicide.

www.wsj.com/tech/ai/chat...

29.08.2025 02:11 — 👍 1    🔁 3    💬 1    📌 0
Preview
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.

Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death.

Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...

26.08.2025 13:01 — 👍 4647    🔁 1746    💬 114    📌 579
Preview
'This Was Trauma by Simulation': ChatGPT Users File Disturbing Mental Health Complaints Gizmodo obtained consumer complaints to FTC through a FOIA request.

I filed a FOIA request with the FTC to get user complaints about ChatGPT.

In one case from Utah, a mother reports her son was experiencing a delusional breakdown and ChatGPT told him to stop taking his medication. The AI bot also told him that his parents were dangerous.

13.08.2025 14:52 — 👍 818    🔁 359    💬 24    📌 44

This has been evident ever since the stories broke on the use of GPT-4, via Azure services, to create intelligence and target lists in Gaza by the IDF. Big Tech is complicit.

06.08.2025 13:58 — 👍 27    🔁 14    💬 2    📌 2
Post image Post image Post image

This is Anas's final will and testament: "I urge you not to let chains silence you, nor borders restrain you. Be bridges toward the liberation of the land and its people, until the sun of dignity and freedom rises over our stolen homeland. I entrust you to take care of my family."

10.08.2025 23:30 — 👍 9    🔁 3    💬 0    📌 0

Before Gaza, there was Nagorno-Karabakh, notes @ishaantharoor.bsky.social. Azerbaijan seized the Armenian enclave and expelled its people. Now a "peace" is set to confirm the ethnic cleansing and the political imprisonment of Armenians like Ruben Vardanyan.
s2.washingtonpost.com/camp-rw/?tra...

08.08.2025 13:53 — 👍 10    🔁 2    💬 0    📌 0
Preview
‘I Feel Like I’m Going Crazy’: ChatGPT Fuels Delusional Spirals An online trove of archived conversations shows the artificial-intelligence model sending users down a rabbit hole of theories about physics, aliens and the apocalypse.

WSJ looked at 100k chats and found at least 0.02% involved delusional characteristics. That sounds low, but if these tools are used by millions, then driving 1/5000 people crazy is super dangerous.

E.g., imagine a high traffic escalator where 1/5000 people fell every day. We’d take it offline.

08.08.2025 11:59 — 👍 34    🔁 21    💬 1    📌 2

The chart errors in GPT-5 materials are egregiously bad—and bad in a way that is biased.

It sure looks like:

1. There is β€œvibe charting” going on;

2. The AI that OpenAI is using to vibe chart (and who knows what else) errs on the side of flattering OpenAI; and

3. OpenAI is over-relying on it.

08.08.2025 12:07 — 👍 3    🔁 0    💬 0    📌 0

"What we really need is a sense that life could be transformationally better" 💯

29.07.2025 00:54 — 👍 57    🔁 14    💬 3    📌 1

What the actual F.

07.08.2025 16:17 — 👍 259    🔁 61    💬 22    📌 1
Screenshot of a tweet by Sam Altman showing a view from high altitude and a shadowy overlay of the Death Star (a light colored sphere with a dimple in it).


Disgusting and tasteless for Altman to post this on any day. But beyond that, to post this on Aug 6, the 80th anniversary of the Hiroshima bombing.

07.08.2025 16:31 — 👍 1    🔁 0    💬 0    📌 0

Non-alphabetical, free association list ends with "31. Trump/ 32. fascist" Nice own goal.

07.08.2025 15:39 — 👍 1    🔁 0    💬 0    📌 0
Preview
More than 130,000 Claude, Grok, ChatGPT, and Other LLM Chats Readable on Archive.org The issue of publicly saving shared LLM chats is bigger than just Google.

New from 404 Media: more than 130,000 Claude, Grok, ChatGPT, and other LLM chats are readable on Archive.org. It's similar to the Google indexing issue, but shows it impacts many more LLMs than just ChatGPT. Some chats contain API keys.

www.404media.co/more-than-13...

07.08.2025 15:18 — 👍 180    🔁 87    💬 3    📌 24
Preview
Tucson City Council rejects Project Blue data center amid intense community pressure - AZ Luminaria The Tucson city council voted unanimously Wednesday against bringing the massive and water-devouring Project Blue data center — tied to tech giant Amazon — into city limits. After weeks of escalating...

Fantastic news out of Tucson!

None of this is inevitable. Not "AI", not data centers, not the surrendering of public water resources, not the handing over of electrical grid priorities to big tech.

azluminaria.org/2025/08/06/t...

07.08.2025 04:04 — 👍 268    🔁 82    💬 0    📌 6

Eating disorders are a very risky area for AI coaching/therapy because those who have them often fear losing the disorder. This, too, is something we’ve long known. As someone who visited pro-ana forums back in early internet days, seeing this replayed in even more isolation-prone AI hits hard.

07.08.2025 14:07 — 👍 1    🔁 0    💬 0    📌 0

This, and: AI and pre-AI automation endangering those at risk of self harm has long been a known risk.

And OpenAI were warned and published the warning years ago — “Advice or encouragement for self harm behaviors” was the first example in the 2.3 Harmful Content section of their own GPT-4 System Card.

07.08.2025 12:37 — 👍 1    🔁 0    💬 0    📌 0
Preview
'Met Police facial recognition tech mistook me for wanted man' Shaun Thompson is challenging the Met Police's use of live facial recognition technology.

“I want structural change. This is not the way forward. This is like living in Minority Report…”

06.08.2025 12:24 — 👍 149    🔁 60    💬 6    📌 3
Preview
‘A million calls an hour’: Israel relying on Microsoft cloud for expansive surveillance of Palestinians Revealed: The Israeli military undertook an ambitious project to store a giant trove of Palestinians’ phone calls on Microsoft’s servers in Europe

"[Microsoft Azure] has facilitated the preparation of deadly airstrikes and has shaped military operations in Gaza and the West Bank" www.theguardian.com/world/2025/a...

Microsoft continues to aid genocide

06.08.2025 11:15 — 👍 133    🔁 99    💬 3    📌 14
Post image 05.08.2025 20:52 — 👍 199    🔁 54    💬 4    📌 1
