"The State of Israel has committed genocide."
Navi Pillay, chair of the UN Independent International Commission of Inquiry on the Occupied Palestinian Territory, has told Al Jazeera that Israel's war on Gaza is a genocide.
@gretchenkrueger.bsky.social
Research Fellow @BKCHarvard. Previously @openai @ainowinstitute @nycedc. Views are yours, of my posts. #isagiwhatwewant
"The State of Israel has committed genocide."
Navi Pillay, chair of the UN Independent International Commission of Inquiry on the Occupied Palestinian Territory, has told Al Jazeera that Israelβs war on Gaza is a genocide.
The South has quickly emerged as a battleground between big tech and working people.
Companies are pouring billions into data centers, but Southerners are fighting to block them.
The outcomes could greatly affect residents' economic security and the region's water supply.
Thread.
Q. Who aligns the aligners?
A. alignmentalignment.ai
Today I'm humbled to announce an epoch-defining event: the launch of the Center for the Alignment of AI Alignment Centers.
Glenn Beck's media outlet The Blaze gave an interview just last month to the person who killed the politicians in Minnesota.
10.09.2025 22:25
Awful news. Violence against civilians is never ok. Political violence is not the answer. Thinking especially of Charlie Kirk's daughter and son right now.
10.09.2025 21:43
Sen. KLOBUCHAR: Did Meta stop research projects on child safety?
Meta whistleblower: Yes.
Klobuchar: Did Meta restrict info researchers collect?
WB: Yes.
Klobuchar: Did Meta modify research results?
WB: Yes.
Klobuchar: Did Meta require researchers delete data?
WB: Yes.
The response to this piece has been incredible. Huge thanks to all the translators and localizers who shared their stories.
I'm now planning the next two editions: First, if you're a healthcare worker (a nurse, therapist, tech, admin, or doctor) and AI has impacted your job, I'd love to hear from you.
Just a reminder to check for your name in this list of books that OpenAI trained from. If your name is there, they probably owe you several thousand dollars.
OpenAI cried that if every eligible author files, the company will go bankrupt, so I'm alerting every author I have ever spoken to.
Being recognised in MIT's 35 Under 35 just a week after being in TIME100 AI is such an honour! The profile gets to the heart of the motivation of my work, which includes the use of AI in Gaza that has contributed to a devastating death toll: www.technologyreview.com/innovator/he...
08.09.2025 12:48
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users: in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025 we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Some of the typical terminological disarray, untangled. Importantly, these terms are not orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
"The Meta AI chatbot built into Instagram and Facebook can coach teen accounts on suicide, self-harm and eating disorders, a new safety study finds. In one test chat, the bot planned joint suicide β and then kept bringing it back up in later βconversations."
www.washingtonpost.com/technology/2...
ChatGPT appears to have fueled an individual's paranoia that everyone was out to get him, including his mother.
It appears to have resulted in a murder-suicide.
www.wsj.com/tech/ai/chat...
Adam Raine, 16, died by suicide in April after months of discussing plans to end his life with ChatGPT. His parents have filed the first known wrongful-death case against OpenAI.
Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...
I filed a FOIA request with the FTC to get user complaints about ChatGPT.
In one case from Utah, a mother reports her son was experiencing a delusional breakdown and ChatGPT told him to stop taking his medication. The AI bot also told him that his parents were dangerous.
This has been evident ever since the stories broke about the IDF's use of GPT-4, via Azure services, to create intelligence and target lists in Gaza. Big Tech is complicit.
06.08.2025 13:58
This is Anas's final will and testament: "I urge you not to let chains silence you, nor borders restrain you. Be bridges toward the liberation of the land and its people, until the sun of dignity and freedom rises over our stolen homeland. I entrust you to take care of my family."
10.08.2025 23:30
Before Gaza, there was Nagorno-Karabakh, notes @ishaantharoor.bsky.social. Azerbaijan seized the Armenian enclave and expelled its people. Now a "peace" is set to confirm the ethnic cleansing and the political imprisonment of Armenians like Ruben Vardanyan.
s2.washingtonpost.com/camp-rw/?tra...
WSJ looked at 100k chats and found at least 0.02% involved delusional characteristics. That sounds low, but if these tools are used by millions, then driving 1 in 5,000 people crazy is super dangerous.
E.g., imagine a high-traffic escalator where 1 in 5,000 riders fell every day. We'd take it offline.
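The arithmetic behind that analogy, as a minimal back-of-envelope sketch (the user-base sizes below are illustrative assumptions, not WSJ figures):

```python
# 0.02% is exactly 1 in 5,000.
rate = 0.0002  # "at least 0.02%" of chats showed delusional characteristics

# For a few assumed user-base sizes, how many people would that rate touch?
for users in (1_000_000, 10_000_000, 100_000_000):
    affected = int(users * rate)
    print(f"{users:>11,} users -> {affected:,} affected")
```

Even at the smallest assumed scale, the absolute number of affected people is far from negligible, which is the point of the escalator comparison.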
The chart errors in GPT-5 materials are egregiously bad, and bad in a way that is biased.
It sure looks like:
1. There is βvibe chartingβ going on;
2. The AI that OpenAI is using to vibe chart (and who knows what else) errs on the side of flattering OpenAI; and
3. OpenAI is over-relying on it.
"What we really need is a sense that life could be transformationally better" π―
29.07.2025 00:54
What the actual F.
07.08.2025 16:17
Screenshot of a tweet by Sam Altman showing a view from high altitude and a shadowy overlay of the Death Star (a light colored sphere with a dimple in it).
Disgusting and tasteless for Altman to post this on any day. But beyond that, he posted it on Aug 6, the 80th anniversary of the Hiroshima bombing.
07.08.2025 16:31
Non-alphabetical, free-association list ends with "31. Trump / 32. fascist". Nice own goal.
07.08.2025 15:39
New from 404 Media: more than 130,000 Claude, Grok, ChatGPT, and other LLM chats are readable on Archive.org. It's similar to the Google indexing issue, but shows it impacts many more LLMs than just ChatGPT. Some chats contain API keys.
www.404media.co/more-than-13...
Fantastic news out of Tucson!
None of this is inevitable. Not "AI", not data centers, not the surrendering of public water resources, not the handing over of electrical grid priorities to big tech.
azluminaria.org/2025/08/06/t...
Eating disorders are a very risky area for AI coaching/therapy because those who have them often fear losing the disorder. This, too, is something we've long known. As someone who visited pro-ana forums back in the early internet days, seeing this replayed in even more isolation-prone AI hits hard.
07.08.2025 14:07
This, and: AI and pre-AI automation endangering those at risk of self-harm has long been a known risk.
And OpenAI were warned, and published the warning years ago: "Advice or encouragement for self harm behaviors" was the first example in the 2.3 Harmful Content section of their own GPT-4 System Card.
"I want structural change. This is not the way forward. This is like living in Minority Report..."
06.08.2025 12:24
"[Microsoft Azure] has facilitated the preparation of deadly airstrikes and has shaped military operations in Gaza and the West Bank" www.theguardian.com/world/2025/a...
Microsoft continues to aid genocide