Margaret Ray

@mbrrray.bsky.social

Poet 🥸 Author of: GOOD GRIEF, THE GROUND (poems, BOA Editions '23) & a Poetry Society of America Chapbook: SUPERSTITIONS OF THE MID-ATLANTIC she/her https://www.margaretbray.com/ https://bookshop.org/a/92150/9781950774845

1,283 Followers  |  580 Following  |  120 Posts  |  Joined: 24.07.2023

Latest posts by mbrrray.bsky.social on Bluesky

Preview
Do OpenAI’s multibillion-dollar deals mean exuberance has got out of hand? Some market watchers are concerned by the circular nature of deals with chip makers Nvidia and AMD

Does an industry that burns cash to boil lakes to erroneously rephrase the Internet deserve a market valuation higher than the GDP of 97% of countries on Earth?

What do we think? www.theguardian.com/business/202...

08.10.2025 11:28 — 👍 15    🔁 2    💬 0    📌 1

Ireland 🥹💚🤍🧡

07.10.2025 10:31 — 👍 17    🔁 2    💬 0    📌 1
Post image

2 new poems of mine are in this rad project (here’s one) style.atwhatcost.me#margaret-ray

06.10.2025 18:37 — 👍 0    🔁 0    💬 0    📌 0
Preview
Sam Altman’s AI empire will devour as much power as New York City and San Diego combined. Experts say it’s ‘scary’ | Fortune. Andrew Chien told Fortune he’s been a computer scientist for 40 years, but we’re close to “some seminal moments for how we think about AI and its impact on society.”

OpenAI’s planned data centers will use more power than New York City & San Diego use at their peak consumption, combined. More power than the entire nations of Switzerland & Portugal combined. And Sam Altman’s buddies in this administration have been cutting all of our energy infrastructure upgrades.

06.10.2025 16:51 — 👍 362    🔁 228    💬 22    📌 69
Video thumbnail

Dr. Jane Goodall filmed an interview with Netflix in March 2025 that she understood would only be released after her death.

05.10.2025 09:08 — 👍 37091    🔁 16726    💬 799    📌 2360

If any other president in all of history said the military should use American cities as a training ground, he would be removed from office that same day. The hardest thing to tolerate in all this is how relatively silent elected Democrats are. It’s ridiculous.

30.09.2025 14:57 — 👍 12002    🔁 3159    💬 246    📌 169

Wow! Brilliant for writers!

29.09.2025 18:15 — 👍 482    🔁 209    💬 7    📌 3
Post image Post image

Come write poems w me in this online workshop w Poetry Society of New York next week? Thurs Oct 2, 7pm www.eventbrite.com/e/psny-virtu...

27.09.2025 14:13 — 👍 1    🔁 1    💬 0    📌 0
Preview
The Luddite Renaissance is in full swing. This fall, the new luddites are rising

in education, creative industries, caring professions, and LABOR ORGANIZING, people are doing things, there's a movement

against-a-i.com

www.bloodinthemachine.com/p/the-luddit...

21.09.2025 14:56 — 👍 290    🔁 94    💬 4    📌 8

Thread 🧵 ⬇️

20.09.2025 21:17 — 👍 4    🔁 0    💬 0    📌 0
Against the Uncritical Adoption of 'AI' Technologies in Academia. Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...

"We and our students can choose not to use these technologies. Just like we have banned smoking from public spaces, we could foster that process of banning both by choosing to individually quit smoking and by demanding regulation of the tobacco industry.”

zenodo.org/records/1706...

37/🧵

19.09.2025 23:49 — 👍 25    🔁 10    💬 1    📌 1

Much like offloading onto LLMs, men are more likely to feel OK offloading generally. Women do not & sadly are used to doing this work. Men are formed by patriarchy to demand, expect, and tolerate this. This is in pure contrast to all the women who contact me to say they will work hard to fight AI.

20.09.2025 07:22 — 👍 74    🔁 26    💬 1    📌 1
Preview
Wildfire smoke is killing Americans. A new study quantifies how much. More intense future wildfires, fueled by further climate change, could lead to 70,000 deaths from smoke exposure a year, according to a new study.

More intense future wildfires, fueled by further climate change, could lead to 70,000 deaths from smoke exposure a year, according to a new study.

19.09.2025 12:05 — 👍 254    🔁 82    💬 8    📌 7

If you see people having their windows smashed in, thrown on the pavement, and ripped away from their children and your thought is to ponder legality, I think you should reconsider what’s important in this moment.

14.09.2025 18:54 — 👍 1031    🔁 265    💬 15    📌 4

I love this framing. For too long it’s felt like morality was framed like a currency. But it should be thought of as a condition, a fitness.

13.09.2025 15:29 — 👍 136    🔁 27    💬 0    📌 1

I just heard someone use the phrasing “mental and MORAL health” and wow, I’m going to start using that too. I want to take care of and improve my moral health.

13.09.2025 13:30 — 👍 634    🔁 107    💬 16    📌 10

"school shooting industry"

09.09.2025 14:50 — 👍 5942    🔁 2155    💬 213    📌 117
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3052    🔁 1548    💬 96    📌 233
Preview
The Game « Kenyon Review Blog. There's a popular conception, widely held by gamers themselves, that games don't mean anything. Don't take them so seriously. They're just for fun.

Also maybe Keith Wilson’s new book: Games for Children? Just out or not quite out so haven’t gotten to read it yet…
Here’s an essay he wrote in 2017 kenyonreview.org/2017/02/the-...

01.09.2025 22:55 — 👍 1    🔁 0    💬 0    📌 0

If All The World and Love Were Young is the book

01.09.2025 22:51 — 👍 0    🔁 0    💬 0    📌 0
Preview
Stephen Sexton: ‘For me, death and Super Mario have always been connected’. The Belfast poet on the poetry and analogies for life in video games

Stephen Sexton has a book orbiting Mario: www.irishtimes.com/culture/book...

01.09.2025 22:50 — 👍 0    🔁 0    💬 1    📌 0

🔔Anyone know an AI “expert” or a journalist whose beat has been AI for years and who ISN’T just a shill for the AI companies’ products with stars in their eyes? Someone critical and realistic?
Looking around for outside experts I could suggest our school bring in to talk to our faculty & students…

31.08.2025 14:40 — 👍 3    🔁 2    💬 3    📌 1

I also really appreciate this advice from @emilymbender.bsky.social and @karenhao.bsky.social, which tracks with my own experience teaching high school students and college students and talking with my own kids. Emphasize environmental costs and - once more - that GenAI is *not* inevitable.

30.08.2025 19:22 — 👍 95    🔁 35    💬 2    📌 0

Ooo thank you for this tip

31.08.2025 17:36 — 👍 2    🔁 0    💬 0    📌 0

Help me find the Ed Yong of AI journalism?

31.08.2025 15:05 — 👍 0    🔁 0    💬 0    📌 0

Help me find the @edyong209.bsky.social of AI journalism?

31.08.2025 14:42 — 👍 0    🔁 0    💬 1    📌 0

We built a calculator that doesn't work, but don't worry, it's also a plagiarism machine that will tell you to kill yourself. It runs on the world's oceans and costs 10 trillion dollars.

My students, who attend a working-class rust-belt college, openly talk abt how much they hate AI & are afraid of its consequences. I wonder how much of the oft-reported student enthusiasm for the tech is merely the result of the NYT’s Ivy League bias

30.08.2025 17:58 — 👍 4053    🔁 828    💬 60    📌 41

One thing Sensible Centrists refuse to acknowledge - and this was true during covid when people were also saying this stuff explicitly - is these people are eugenicists who welcome a good plague to get rid of the old, botched, and bungled

30.08.2025 12:15 — 👍 2262    🔁 639    💬 29    📌 15
Wolf Confessions

You are what you eat
Which is why I am your
grandmother

found this poem in a 2018 file in my notes app

29.08.2025 22:16 — 👍 311    🔁 41    💬 15    📌 5

@mbrrray is following 19 prominent accounts