Alondra Nelson

@alondra.bsky.social

Scholar, author, policy advisor | alondranelson.com | Science, Technology, and Social Values Lab: https://www.ias.edu/stsv-lab

45,842 Followers  |  1,910 Following  |  1,135 Posts  |  Joined: 02.05.2023

Posts by Alondra Nelson (@alondra.bsky.social)

When you start a war with a country of 90 million but without a compelling or agreed-upon explanation as to why

03.03.2026 02:47 — 👍 1053    🔁 160    💬 28    📌 5

@bradleyonishi.bsky.social @profsamperry.bsky.social

03.03.2026 03:10 — 👍 2    🔁 0    💬 0    📌 0

“Many of their commanders are especially delighted with how graphic this battle will be zeroing in on how bloody all of this must become in order to fulfill and be in 100% accordance with fundamentalist Christian end of the world eschatology.”

03.03.2026 03:09 — 👍 436    🔁 211    💬 29    📌 36
Trading on Violence
Insider trading on prediction markets is turning into a dog-bites-man type of story.

On Polymarket, transactions are visible but identities are not. This invites not only insider trading but also, as recent events suggest, potential betting directly on & profiting from geopolitical violence, as CASBS fellow @rajivsethi.bsky.social explains

➡️ rajivsethi.substack.com/p/trading-on...

03.03.2026 00:05 — 👍 5    🔁 9    💬 0    📌 0
Using foundation models in national security contexts may introduce unique concerns threatening human rights. For example, a government’s ability to train models on citizens’ data obtained through commercial data brokers that would otherwise need a warrant, court order, or subpoena to obtain may allow governments to further exercise coercive powers that are automated through AI decision-making [6]. Such use may subvert due process, exacerbated when inaccurate outputs inflict unjust harms on civilians. Appropriate interventions may include the extension of data minimization principles to include purpose limitations on the collection, processing, and transfer of personal data to third parties for intelligence purposes.

The Atlantic notes how the Pentagon wants to "analyze bulk data collected from Americans." From our 2024 "Mind the Gap" paper, a snippet I have come back to what seems like dozens of times at this point.
www.theatlantic.com/technology/2...

02.03.2026 16:18 — 👍 31    🔁 19    💬 2    📌 0
U.S. Races to Accomplish Iran Mission Before Munitions Run Out
President Trump says the Iran campaign might last a week or longer, but dwindling stockpiles could limit his options.

i'm no fancy military strategist but "we didn't really plan so we're going to run out of munitions soon" seems not great

02.03.2026 01:19 — 👍 382    🔁 108    💬 23    📌 36

Forgot about Bahrain.

But while we're at it, add another one to this list: "Board of Peace" member Cyprus.

Al-Jazeera is reporting that a UK base in Cyprus was just hit by a drone strike. The UK had just authorized the US to use it to strike Iranian targets.

02.03.2026 03:30 — 👍 225    🔁 93    💬 6    📌 3
Ursula K. Le Guin — Nominate a Book for the Ursula K. Le Guin Prize for Fiction

It's time! Nominations are now open for the 2026 Ursula K. Le Guin Prize for Fiction, which will be given to a work of imaginative fiction, published in 2025, that reflects the concepts and ideas that were central to Ursula’s own work.

01.03.2026 15:33 — 👍 554    🔁 244    💬 3    📌 19
Gift of Mrs. David M. Levy

Jacob Lawrence, Housing for the Negroes was a very difficult problem, 1940-41
https://botfrens.com/collections/14377/contents/1134645

01.03.2026 23:12 — 👍 132    🔁 28    💬 0    📌 0

Tech execs provide their kids with liberal arts education to ensure their futures—all while many state schools are becoming all STEM. www.wsj.com/lifestyle/ca...

01.03.2026 16:13 — 👍 219    🔁 96    💬 5    📌 11

It's getting to be you can't even profit off war deaths on the insider trading app anymore

01.03.2026 17:31 — 👍 413    🔁 80    💬 10    📌 1
UNC-CH Will ‘Scrap’ New Recording Policy, Chancellor Says
The move comes less than three weeks after the controversial rules were enacted.

Congrats to @ncaaup.bsky.social & all who fought against this surveillance policy that would've allowed admin to hijack microphones in the classroom for secret recordings.

This move would've chilled classroom discussion & suppressed students' willingness to ask questions & take intellectual risks.

01.03.2026 01:33 — 👍 530    🔁 204    💬 7    📌 11
Post image

Six suspected insiders made $1.2M betting on a US strike on Iran

these wallets
• were funded in the last 24h
• specifically bet for February 28
• bought "yes" hours before the strikes

01.03.2026 13:34 — 👍 140    🔁 79    💬 12    📌 21
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the hel...

Claude goes to war in Iran: "Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools."

01.03.2026 11:47 — 👍 106    🔁 67    💬 3    📌 9

OpenAI posted the terms of the deal. Reveals that it absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications on lines *outside the US*, even if they contain info from/on US persons.

openai.com/index/our-ag...

01.03.2026 05:20 — 👍 2636    🔁 1159    💬 31    📌 73
User Chris: What was the core difference why you think the DoW accepted OpenAI but not Anthropic

Sam Altman: 
I can't speak for them, but to speculate with the best understanding of the situation.

*First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one. I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.

*We believe in a layered approach to safety--building a safety stack, deploying FDEs and having our safety and alignment researchers involved, deploying via cloud, working directly with the DoW. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one.

*We and the DoW got comfortable with the contractual language, but I can understand other people would have a different opinion here.

*I think Anthropic may have wanted more operational control than we did

I saw some folks asking what the difference was between what OpenAI signed with the DoD and what Anthropic said they wanted, and Sam more or less admits here the key point: OpenAI's deal requires them to trust the NSA. Anthropic's contract had real safeguards.

01.03.2026 04:38 — 👍 2418    🔁 600    💬 26    📌 50
Mamdani Is Naming New York’s Next Culture Czar

We continue to build a team worthy of New Yorkers and the bold ambitions of the Mamdani administration. The latest addition is Diya Vij. Together we will make sure that art & artists are celebrated, culture and community are valued, and all of it creates more economic justice. Welcome aboard, Diya!

01.03.2026 00:21 — 👍 30    🔁 3    💬 1    📌 0
Screenshot of New York Times article: “OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash”

Screenshot of text from article: “Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose, a term required by the Pentagon. But OpenAI also said it had found a way to ensure that its technologies would adhere to its safety principles by installing specific technical guardrails on its systems.”

My major takeaway from the last year of reporting on generative AI chatbots is that safety guardrails can fail when conversations run long and that everyone who works in this space knows that

www.nytimes.com/2026/02/27/t...

28.02.2026 22:35 — 👍 176    🔁 47    💬 8    📌 1
Video thumbnail

UN Secretary General Antonio Guterres: “Military action carries the risk of igniting a chain of events that no one can control in the most volatile region of the world”

28.02.2026 21:44 — 👍 12616    🔁 3821    💬 427    📌 192

Great 🧵 with many links to what political science research & history can tell us about FIRCs (foreign-imposed regime changes):

28.02.2026 21:59 — 👍 79    🔁 40    💬 0    📌 1
The Jazz Pictures the FBI Silenced
Fearing for her safety, Lisette Model buried her photos of artists like Billie Holiday and Louis Armstrong, but a new book reveals them to the world.

"The Jazz Pictures the FBI Silenced—Fearing for her safety, Lisette Model buried her photos of artists like Billie Holiday and Louis Armstrong, but a new book reveals them to the world" hyperallergic.com/the-jazz-pic...

28.02.2026 19:31 — 👍 21    🔁 7    💬 0    📌 0

It’s packed ❤️🥰

No to war in Iran!

28.02.2026 20:18 — 👍 865    🔁 203    💬 7    📌 1

Yes

28.02.2026 21:43 — 👍 46    🔁 15    💬 1    📌 0
OpenAI strikes deal with Pentagon after Trump orders government to stop using Anthropic
On X, Defense Secretary Pete Hegseth said he had moved to label Anthropic as a "supply chain risk" and cancel Defense business with the company.

One way to read the AI/Pentagon news from last night (I covered it but didn't skeet) is that the Department of Defense wants AI to automate weapons and/or spy on Americans and that Anthropic would have the best AI to do that, but OpenAI is at least the second-best so they'll just use that instead.

28.02.2026 19:29 — 👍 142    🔁 64    💬 11    📌 7
Melania Trump will preside over a UN Security Council meeting in a first for a first lady
U.S. first lady Melania Trump will preside over a U.N. Security Council meeting in what the United Nations said Thursday would be a first.

File under: the new multilateralism apnews.com/article/mela...

28.02.2026 18:40 — 👍 75    🔁 13    💬 47    📌 46
Iran Strikes Feel Like 2003 All Over Again
Less than a year ago, US President Donald Trump gave a speech in the Middle East in which he excoriated his predecessors for their habit of launching “forever wars” in that region. Alluding to the Ame...

“If ever history demanded that Congress reclaim its constitutional monopoly in declaring war, that moment is now.”
www.bloomberg.com/opinion/arti...

28.02.2026 17:45 — 👍 299    🔁 75    💬 24    📌 6
At least 63 girls killed in strike on school in southern Iran
Eyewitness tells MEE girls aged between seven and 12 seen lying dead across their school

This account includes an eyewitness.

Also: “At least 85 people, almost all of them young girls, have been killed in an air strike on a primary school in southern Iran, the Iranian judiciary said.”

28.02.2026 16:20 — 👍 166    🔁 93    💬 3    📌 16
Opinion | Trump’s Strikes on Iran Were Unlawful. Here’s Why That Matters.

The strikes on Iran are blatantly illegal. I explained in June why the strikes on Iran's nuclear facilities were unlawful under US and international law. Everything I wrote then is true today, but this is a far larger assault with far graver consequences.

www.nytimes.com/2025/06/23/o...

28.02.2026 12:38 — 👍 1832    🔁 603    💬 46    📌 22

Don’t worry everyone, American media is definitely on this shit.

On the liberal MSNOW you’ve got a former Republican congressman talking about his conversation with Trump about the “terrorist regime” and over on CNN Wolf Blitzer is asking guests how they come up with the names of the operations.

28.02.2026 14:33 — 👍 272    🔁 51    💬 16    📌 4
How US tech giants supplied Israel with AI models, raising questions about tech's role in warfare
U.S. tech giants have quietly empowered Israel to track and kill many more alleged militants more quickly in Gaza and Lebanon through a sharp spike in artificial intelligence and computing services.

I would like to remind people that AI has already been used to kill people

OpenAI and Microsoft supplied Israel with AI models to track and kill people in Gaza which led to increased civilian deaths

apnews.com/article/isra...

28.02.2026 14:26 — 👍 1439    🔁 697    💬 26    📌 21