Eryk Salvaggio

@eryk.bsky.social

Situationist Cybernetics, Artist/Researcher, anti-AI AI artist. Aim to be kind. Researcher, AI Pedagogies, metaLab (at) Harvard University. Artist in Residence @ the Machine Visual Culture Research Group, Max Planck Institute, Rome. cyberneticforests.com

11,707 Followers  |  4,482 Following  |  2,694 Posts  |  Joined: 12.04.2023

Latest posts by eryk.bsky.social on Bluesky

I Saved a PNG Image To A Bird
YouTube video by Benn Jordan

New doc releases today!

- Ultrasonic recording of a starling that can record and playback virtually any sound

- Analyzing incredible slowed-down bird songs

- Showing you how to do this (and way more) on the cheap

youtu.be/hCQCP-5g5bo

26.07.2025 14:11 — 👍 298    🔁 82    💬 22    📌 27
"Your settings are saved: Consent, simulation, and the right to remain dead in the age of grief AI." A woman is lying down, her hands covering her face. A camera and Polaroids are scattered around her head.

Reposting this essay (no paywall) on grief AI. From courtroom testimony to children’s bedtime conversations, people are using simulations of the dead in ways that feel intimate, comforting, or useful. But comfort isn’t the same as consent, & simulation isn’t memory. jgcarpenter.com/blog.html?bl...

03.08.2025 19:11 — 👍 7    🔁 5    💬 0    📌 0

I am also fairly sure that this “generated text as flawed writing partner” model wouldn’t quite justify the costs or harms the industry is willing to pass on to us, so of course the frame is stuck on “hypothetical, aspirational and absolutely anti-humanist”

02.08.2025 13:22 — 👍 42    🔁 5    💬 1    📌 0

I suspect system prompts and critical literacy could create statistically generated text useful for thought. But it would need to be framed as constantly suspect and shift to expectations that LLMs are for knowledge to be tested rather than “transmitted.” That’s the opposite of what we’re seeing. …

02.08.2025 13:22 — 👍 26    🔁 3    💬 3    📌 0
Opinion | A.I. Is Not a 21st-Century Enlightenment

“We formulate the questions, drive the inquiry … and we search … for answers that reinforce what we already think we know. ChatGPT has often responded, with patently insincere flattery: ‘That’s a great question.’ It has never responded: ‘That’s the wrong question.’” www.nytimes.com/2025/08/02/o...

02.08.2025 13:22 — 👍 153    🔁 48    💬 5    📌 5
Black t shirt that says “John Lennon Broke Up Fluxus.”

I have never been able to fit into this t shirt since I bought it but now I can, good thing the message is evergreen

01.08.2025 19:53 — 👍 56    🔁 6    💬 5    📌 0
chart: capital expenditures, quarterly

shows hockey-stick growth in the capital expenditures of Amazon, Microsoft, Google, and Meta, almost entirely on data centers

in the most recent quarter it was nearly $100 billion, collectively


The AI infrastructure build-out is so gigantic that in the past 6 months, it contributed more to the growth of the U.S. economy than /all of consumer spending/

The 'magnificent 7' spent more than $100 billion on data centers and the like in the past three months *alone*

www.wsj.com/tech/ai/sili...

01.08.2025 12:19 — 👍 780    🔁 309    💬 74    📌 269
The psychology of LLM interactions: the uncanny valley and other minds Through their ability to converse in a human-like fashion, large language models (LLMs) have underscored the need to revisit our definitions of consciousness, and how we know if someone who claims ...

Pin-worthy article on “c-expressions,” defined here as phrases LLMs generate that refer to themselves and inner experiences they do not have (such as “I panicked” when explaining mistakes), framing these expressions as an “uncanny valley of language.” www.tandfonline.com/doi/full/10....

01.08.2025 13:40 — 👍 25    🔁 6    💬 1    📌 0

All of these GOATs get tanked by Twin Peaks, Mad Men, The Leftovers, Succession and Bojack Horseman.

01.08.2025 12:27 — 👍 7    🔁 0    💬 0    📌 1

Based on the prevailing definition of AI as “automating cognitive labor,” switchboard operators were phased out by “AI” decades ago

31.07.2025 02:11 — 👍 9    🔁 0    💬 2    📌 0

Did Philomena Cunk write this

29.07.2025 02:27 — 👍 110    🔁 23    💬 7    📌 0
AI Could Never Be ‘Woke’ | TechPolicy.Press The Trump administration’s plan, in targeting “ideological bias” and “social engineering agendas” in AI, ultimately enforces them, writes Eryk Salvaggio.

In targeting “ideological bias” and “social engineering agendas,” the Trump administration’s AI Action Plan ultimately enforces them, writes Tech Policy Press fellow Eryk Salvaggio (@eryk.bsky.social). www.techpolicy.press/sorry-donald...

27.07.2025 15:15 — 👍 26    🔁 9    💬 0    📌 0

Remember that AI Coup I was talking about? It didn’t go away with Elon. This is the awful plan to automate government with slop engines — because the ethos is that anyone who needs government support doesn’t deserve to have it. ⬇️

26.07.2025 14:27 — 👍 70    🔁 22    💬 1    📌 1
DOGE builds AI tool to cut 50 percent of federal regulations The U.S. DOGE Service is using a new AI tool to eliminate federal regulations, aiming to cut 50 percent of rules by the first anniversary of President Donald Trump’s inauguration.

DOGE "is using a new artificial intelligence tool to slash federal regulations, with the goal of eliminating half of Washington’s regulatory mandates by the first anniversary of President Donald Trump’s inauguration, according to documents obtained by The Washington Post" and four govt officials.

26.07.2025 13:56 — 👍 200    🔁 147    💬 14    📌 59
AI Could Never Be ‘Woke’ | TechPolicy.Press The Trump administration’s plan, in targeting “ideological bias” and “social engineering agendas” in AI, ultimately enforces them, writes Eryk Salvaggio.

"Objectivity is politically defined. Data for AI was never collected or interpreted objectively, and language is socially constructed.

"The order is not about 'truthfulness and accuracy,' but an exercise in power that locks ideology into place."

from @eryk.bsky.social

25.07.2025 18:52 — 👍 10    🔁 1    💬 0    📌 0
AI Could Never Be ‘Woke’ | TechPolicy.Press The Trump administration’s plan, in targeting “ideological bias” and “social engineering agendas” in AI, ultimately enforces them, writes Eryk Salvaggio.

There could never be a 'Woke AI.' On a technical level, these are machines trained to identify and reproduce statistical biases in uncurated datasets. They are constrained only by the instructions written by the companies building them. The order just demands AI companies do it in a specific way. ⬇️

25.07.2025 14:38 — 👍 38    🔁 14    💬 4    📌 0
Headline from NY Post: NYC kids banned from using phones, smartwatches, tablets in public schools starting this fall

Nearly two decades after my principal said we couldn't do anything about kids' phones, so we had to figure out how to use them in our lessons, and after two decades of schools failing to "teach kids how to use them responsibly," phones are being banned and we're getting the same message about AI.

25.07.2025 10:30 — 👍 63    🔁 19    💬 1    📌 1
Unpacking Trump’s AI Action Plan: Gutting Rules and Speeding Roll-Out | TechPolicy.Press Trump's action plan on artificial intelligence calls for slashing regulations and easing rollout of the technology, Cristiano Lima-Strong reports.

On Wednesday, the Trump administration unveiled its AI Action Plan. Looking to make sense of what it means for the future of AI in the US? Tech Policy Press has you covered (1/4)

1. @viacristiano.bsky.social unpacks the AI Action Plan and what it means:

25.07.2025 14:35 — 👍 7    🔁 4    💬 3    📌 0
Unpacking Trump’s AI Action Plan: Gutting Rules and Speeding Roll-Out | TechPolicy.Press Trump's action plan on artificial intelligence calls for slashing regulations and easing rollout of the technology, Cristiano Lima-Strong reports.

More on the AI Action Plan at @techpolicypress.bsky.social: "[Trump] directs agencies to only procure “unbiased” AI products that adhere to “ideological neutrality.” The order offers sparse detail on how this will be evaluated." www.techpolicy.press/unpacking-tr...

25.07.2025 14:42 — 👍 2    🔁 1    💬 0    📌 0
The White House orders tech companies to make AI bigoted again War is peace, bias is objectivity, speech is censorship.

“LLMs produce incontrovertibly incorrect information with clear potential for real-world harm; they can falsely identify innocent people as criminals, misidentify poisonous mushrooms, and reinforce paranoid delusions. This order has nothing to do with any of that.”

24.07.2025 23:01 — 👍 75    🔁 22    💬 1    📌 2
AI Could Never Be ‘Woke’ | TechPolicy.Press The Trump administration’s plan, in targeting “ideological bias” and “social engineering agendas” in AI, ultimately enforces them, writes Eryk Salvaggio.

In targeting “ideological bias” and “social engineering agendas,” the Trump administration’s AI Action Plan ultimately enforces them, writes Tech Policy Press fellow Eryk Salvaggio.

24.07.2025 15:41 — 👍 48    🔁 18    💬 5    📌 4

I know: they drive people into desperation, then target them for that desperation. But if you point out that driving people into desperation has no social benefit, makes society worse, is avoidable, and is grossly unjust, they will claim your views are extreme.

24.07.2025 23:47 — 👍 102    🔁 14    💬 1    📌 1

"Bias shapes what we see and what we make invisible. Objectivity is politically defined. Data for AI was never collected or interpreted objectively, and language is socially constructed [...] there is no such thing as an unbiased AI system." 👏👏

24.07.2025 17:47 — 👍 17    🔁 9    💬 1    📌 0

"...the plan calls for the Department of Education to work on 'AI skill development as a core objective of relevant education and workforce funding streams,' again emphasizing the technical knowledge required for accelerating AI adoption rather than critical literacies." Great by @eryk.bsky.social

24.07.2025 18:03 — 👍 12    🔁 7    💬 0    📌 0

We should also add that the guaranteed failures of large language models in critical decisions about government services ensure that the people who rely on those services can be harmed without accountability.

23.07.2025 20:02 — 👍 127    🔁 37    💬 3    📌 2
How big tech is force-feeding us AI Plus, OpenAI's absurd listening tour, top AI scientists say AI is evolving beyond our control, Facebook is putting data centers in tents, and the AI bubble question — answered?

Big tech promotes the story that since ChatGPT burst onto the scene, AI has been so popular that companies have rushed to meet demand. The reality is different: a study by design scholars shows tech companies have had to push AI on users with a variety of intrusive tactics.

How big tech is force-feeding us AI:

23.07.2025 16:01 — 👍 2195    🔁 968    💬 42    📌 106
Reporter: The FDA has a new AI tool that's intended to speed up drug approvals. But several FDA employees say the new AI helper is making up studies that do not exist. One FDA employee telling us, 'Anything that you don't have time to double check is unreliable. It hallucinates confidently'

23.07.2025 17:24 — 👍 4411    🔁 1465    💬 170    📌 698

Good clarification, though the point holds that trusting AI with healthcare doesn’t need to be rushed.

23.07.2025 18:20 — 👍 0    🔁 0    💬 1    📌 0