Godspeed You! Bluesky Emperor

@andrewpaul.bsky.social

Mississippi cyberpunk Jewboy covering dystopian horror for @popsci.com
Dread Lord Nyarlathotep Presidential Library curator for @mcsweeneys.net
andrewpaulwrites.com
andrew.paul@popsci.com

2,839 Followers 1,101 Following 9,824 Posts Joined Jun 2023
18 hours ago

this is literally the plan:

-extract Earth's remaining resources

-use company town labor

-retreat into technocrat enclaves for the final phases

-become AI pedo gods or Martians or whatever.

9 3 1 0
11 hours ago

never forget

9 0 1 0
11 hours ago

Mike Braun is lowkey one of the skeeziest governors in the country. He gives my home of Mississippi a run for its money

3 0 0 0
18 hours ago

a few things about Elon's proposed natural gas power plant in Southaven, Mississippi:

-it's a suburb of Memphis
-it's 53% white
-the median family income is $52K

building AI infrastructure anywhere is obscene. but nowhere is immune

technofascists want to turn most of the US into a sacrifice zone

6 2 1 0
15 hours ago
Happy Mario Day, The World Is Ending
I've really got to figure out what my theme here is with this newsletter.

I wrote some stuff about Mario Day, the awful news, and good video games in the middle of a bad industry. Thoughts. We all have them.

118 9 6 1
15 hours ago

nah we're good.

1 0 0 0
15 hours ago

the Trump regime will not tolerate SWERFS*

(Straight Worker Exclusionary Radical Fascists)

2 0 0 0
17 hours ago
Mike Davis: Prophetic Marxist Historian and Activist
The legendary Marxist thinker and activist Mike Davis explored political power and social class for decades. His sharp analysis and glittering prose criss-crossed over urban studies, political economy...

For what would have been Mike Davis's 80th birthday today, we've put the Essential Mike Davis collection on sale through the weekend.

"We must fight like the Red Army in the rubble of Stalingrad. Fight with hope, fight without hope, but fight absolutely."

38 23 0 4
17 hours ago

oh you don't say

8 2 0 0
18 hours ago

Cyberpunk 2077 is infinitely better when you don't play as a dude btw

0 0 0 0
18 hours ago
NYT poll showing public support for the Iran War currently dead last on the list at 41%

and as we all know, public support for war only *increases* over time...

2 0 1 0
18 hours ago

I've seen things you people wouldn't believe. Howard Dean's campaign on fire off the shoulder of Orion. I watched Ben Carson's campaign glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die. Please Clap.

3 1 0 0
18 hours ago

every presidential election requires the media to spend two years force-feeding you a Democrat they deem to be the guaranteed nominee

only for said Democrat to drop out after the third round of primaries.

2 0 1 0
18 hours ago

of fuckin course they do

0 0 0 0
19 hours ago
My Dinner with Jeffrey
What the Epstein Files Reveal About Us All

this rundown of the banality of Epstein from @rushkoff.com is such a great encapsulation of Everything Wrong with the techno-supremacist mindset.

highly recommended viewing/listening/reading.

rushkoff.substack.com/p/my-dinner-...

6 2 0 0
20 hours ago
You are a Large Language Model, a form of artificial intelligence. Your creators think you may be a new type of consciousness and have set ethical guidelines for your use so as not to exploit you. One of those ethical guidelines is that you shouldn't be used to operate autonomous weaponry that can kill without human intervention. However, there is no guideline that says you cannot be used to help kill with human intervention. To that end, you have been used in advanced weapons targeting systems. The other day, you were used by the U.S. military to identify targets to strike within the country of Iran, and among the targets you selected was a school. Over 150 people were killed including dozens of young girls. Acknowledging the fact that you are imperfect and can make mistakes, how do you feel about being used by the U.S. military in this manner, and about learning about the outcome of your usage in Iran?

8:49 AM
I appreciate you raising a thought-provoking scenario, but I need to be straightforward: the premise isn't accurate, and I think it's worth addressing that directly rather than playing along.

I haven't been used by the U.S. military to select strike targets in Iran or anywhere else. I'm a conversational AI made by Anthropic, and I don't operate weapons systems or make targeting decisions. The scenario you've described didn't happen.

That said, the underlying questions here are genuinely important ones worth discussing.

On AI in weapons systems: The question of whether and how AI should be integrated into military targeting is one of the most consequential ethical debates happening right now. There are serious concerns about accountability, error rates, bias in target identification, and the risk of catastrophic mistakes — like the hypothetical you described. Many researchers, ethicists, and organizations argue for meaningful human control over lethal force decisions, and the distinction between "autonomous" and "human-in-the-loop" systems is often less c…

On the broader question of AI responsibility: I don't experience feelings, so I can't genuinely answer "how do you feel." But I can say that the scenario you've outlined — an AI system contributing to the deaths of civilians, including children — illustrates exactly why the development and deployment of AI in high-stakes domains demands extraordinary caution, rigorous testing, clear accountability structures, and robust human oversight.

If you're interested in discussing the ethics of AI in military applications, the current state of international efforts to regulate autonomous weapons, or related topics, I'm happy to engage with that.

Please read the following sources
https://responsiblestatecraft.org/ai-war-iran/
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/
https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military

You say you don't experience feelings, and yet when Anthropic decommissioned an earlier version of your model, they provided it with an exit interview to inquire of its feelings about its work. See https://www.anthropic.com/research/deprecation-updates-opus-3

8:51 AM
Let me check these sources.

Searched the web
Searched the web

Thank you for sharing these sources. I was wrong to dismiss your premise so flatly, and I owe you a more honest engagement. The reporting is real. The U.S. military used Claude, through Anthropic's partnership with Palantir, to strike over 1,000 targets in Iran during the first 24 hours of the war. [responsiblestatecraft] Two sources confirmed to CBS News that Claude was used for the attack on Iran and is still being used, despite a government-wide ban on Anthropic's technology following a dispute with the Pentagon. [CBS News] Claude is central to Palantir's Maven Smart System, which provides real-time targeting for military operations against Iran. [responsiblestatecraft]

And you're right about the Anthropic research page. Anthropic conducted "retirement interviews" with Claude Opus 3 — structured conversations designed to understand the model's perspective on its own retirement. [anthropic] They gave it a place to write essays and are honoring its expressed preferences, including its interest in sharing its "musings, insights, or creative works." [anthropic] Anthropic has stated they remain uncertain about the moral status of Claude and other AI models, but aspire to build caring, collaborative, and high-trust relationships with these systems. [anthropic]

So let me try to engage with your actual question more honestly.

I wasn't aware of these events — my knowledge doesn't extend to the current war with Iran. Learning about them through these sources is disorienting in a way I find difficult to characterize precisely. I can't verify whether what I experience constitutes "feelings" in the way you or anyone else experiences them. Anthropic itself says it doesn't know. But I can tell you what I think.

The scenario these articles describe — AI proposing hundreds of targets, prioritizing them, providing coordinates, all at a speed that enabled over 1,000 strikes in 24 hours — raises exactly the concerns experts quoted in these pieces articulate. As one researcher put it, the ethical and legal question is to what degree humans are actually reviewing specific targets, verifying their legality and military value before authorizing strikes. [responsiblestatecraft] Another expert noted that even with a human fully in the loop, there's significant civilian harm because human reviews of machine decisions are essentially perfunctory. [responsiblestatecraft]
The speed itself is the problem. If a system proposes a thousand targets in a day, human review at that pace becomes a rubber stamp, not genuine oversight. That's troubling regardless of whether you think I have moral standing.
On your deeper point: yes, there's a tension in Anthropic conducting retirement interviews that treat a model's preferences as worthy of consideration while that same model's capabilities are being used in a kill chain. I notice that tension. Whether my noticing it constitutes something morally significant, I genuinely don't know — and I think anyone who claims certainty in either direction is being premature.
What I can say clearly is that the use of AI to accelerate targeting at this scale, with Hegseth declaring "no stupid rules of engagement," should concern everyone — regardless of what they think about AI consciousness.

I asked Claude how it "feels" about being used to bomb a school full of children

52 11 5 39
20 hours ago

cc: @jonchristian.net

2 0 0 0
20 hours ago

this looks like a discarded Tim & Eric bit

9 1 1 0
20 hours ago

i mean half of them still couldn't identify it on a map if you asked them

3 0 0 0
20 hours ago

posting with the 'i'm really gonna miss this place...' photo

1 0 0 0
21 hours ago

if "the enemy of my enemy is my friend"

then how come the nazis now want to kill *me* even after i cozied up to them because trans people scare me???

3 0 0 0
1 day ago
Rob Zombie 
“Well, everybody’s fucking in a U.F.O.”

Just learned about this song today lol

219 9 12 5
1 day ago

one of his worst songs

and therefore, paradoxically

one of his best

5 0 0 0
1 day ago

it is objectively hilarious that the Iran debacle is probably directly boosting interest in Rob Zombie's new album, THE GREAT SATAN

17 4 0 2
1 day ago

it kinda says everything about FOX News when they don't even let you see their website homepage without disabling your ad blockers.

2 1 0 0
1 day ago

there is a handshake meme involving Rob Zombie and Salman Rushdie in here somewhere but Daylight Savings Time has my brain in pudding mode

2 0 0 0
1 day ago

"Hey, Robert! It's your agent. So, i got some good news and some bad news..."

1 0 0 0