We need better Democrats.
Not a single dollar more for Trump's illegal war.
On another day where joy is in short supply...might I suggest taking a journey with De La Soul and yesterday's Tiny Desk Concert
youtu.be/5AVYDHTOixU?...
this is a must read
04.03.2026 20:46
Honored and delighted that Professor @julietschor.bsky.social will deliver the 2026 IAS Lecture on Public Policy: “Rethinking Platform Labor” on March 10, 2026. Curated by the School of Social Science. In-person, with a recording available afterward. Register: www.ias.edu/events/lectu...
@alondra.bsky.social joins @djrothkopf.bsky.social to discuss the DoD's partnerships with private-sector AI companies, the Pentagon's push for unrestricted AI, and what this all means for our future. podcasts.apple.com/us/podcast/d...
youtu.be/ycevB5IDTFY
“… a record 6% of workers in 401(k) plans administered by Vanguard Group took a hardship withdrawal. That is up from 4.8% in 2024 and a prepandemic average of about 2% …”
@wsj.com
www.wsj.com/personal-fin...
Please save the date, Cambridge-Boston friends: histsci.fas.harvard.edu/announcing-a...
04.03.2026 13:12
Great resource for anyone engaged or interested in US AI policy and governance, from @geomblog.bsky.social and his team: www.brown.edu/news/2026-03...
04.03.2026 13:09
Join us April 18–20, 2026 at The Institute for Advanced Study in Princeton, NJ for "Critical Intent as a Form of Life: A Conference in Honor of Didier Fassin." Open to all, in-person only. Spread the word and register: www.ias.edu/sss/critical...
03.03.2026 14:01
When you start a war with a country of 90 million but without a compelling or agreed-upon explanation as to why
03.03.2026 02:47
@bradleyonishi.bsky.social @profsamperry.bsky.social
03.03.2026 03:10
“Many of their commanders are especially delighted with how graphic this battle will be, zeroing in on how bloody all of this must become in order to fulfill and be in 100% accordance with fundamentalist Christian end-of-the-world eschatology.”
03.03.2026 03:09
On Polymarket, transactions are visible but identities are not. This invites not only insider trading but also, as recent events suggest, potential betting directly on & profiting from geopolitical violence, as CASBS fellow @rajivsethi.bsky.social explains
➡️ rajivsethi.substack.com/p/trading-on...
Using foundation models in national security contexts may introduce unique concerns threatening human rights. For example, a government's ability to train models on citizens' data obtained through commercial data brokers (data it would otherwise need a warrant, court order, or subpoena to obtain) may allow governments to further exercise coercive powers that are automated through AI decision-making [6]. Such use may subvert due process, a risk exacerbated when inaccurate outputs inflict unjust harms on civilians. Appropriate interventions may include extending data minimization principles to include purpose limitations on the collection, processing, and transfer of personal data to third parties for intelligence purposes.
The Atlantic notes how the Pentagon wants to "analyze bulk data collected from Americans." From our 2024 "Mind the Gap" paper, a snippet I have come back to what seems like dozens of times at this point.
www.theatlantic.com/technology/2...
i'm no fancy military strategist but "we didn't really plan so we're going to run out of munitions soon" seems not great
02.03.2026 01:19
Forgot about Bahrain.
But while we're at it, add another one to this list: "Board of Peace" member Cyprus.
Al-Jazeera is reporting that a UK base in Cyprus was just hit by a drone strike. The UK had just authorized the US to use it to strike Iranian targets.
It's time! Nominations are now open for the 2026 Ursula K. Le Guin Prize for Fiction, which will be given to a work of imaginative fiction, published in 2025, that reflects the concepts and ideas that were central to Ursula's own work.
01.03.2026 15:33
Gift of Mrs. David M. Levy
Jacob Lawrence, Housing for the Negroes was a very difficult problem, 1940-41
https://botfrens.com/collections/14377/contents/1134645
Tech execs provide their kids with liberal arts education to ensure their futures, all while many state schools are becoming all STEM. www.wsj.com/lifestyle/ca...
01.03.2026 16:13
It's getting to be you can't even profit off war deaths on the insider trading app anymore
01.03.2026 17:31
Congrats to @ncaaup.bsky.social & all who fought against this surveillance policy that would've allowed admin to hijack microphones in the classroom for secret recordings.
This move would've chilled classroom discussion & suppressed students' willingness to ask questions & take intellectual risks.
Six suspected insiders made $1.2M betting on a US strike on Iran
these wallets:
• were funded in the last 24h
• specifically bet for February 28
• bought "yes" hours before the strikes
Claude goes to war in Iran: "Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools."
01.03.2026 11:47
OpenAI posted the terms of the deal. They reveal that it absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance: by capturing communications through taps on lines *outside the US*, even when they contain info from or on US persons.
openai.com/index/our-ag...
User Chris: What was the core difference why you think the DoW accepted OpenAI but not Anthropic?

Sam Altman: I can't speak for them, but to speculate with the best understanding of the situation:
* First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one. I have seen what happens in tense negotiations when things get stressed and deteriorate super fast, and I could believe that was a large part of what happened here.
* We believe in a layered approach to safety: building a safety stack, deploying FDEs and having our safety and alignment researchers involved, deploying via cloud, working directly with the DoW. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one.
* We and the DoW got comfortable with the contractual language, but I can understand other people would have a different opinion here.
* I think Anthropic may have wanted more operational control than we did.
I saw some folks asking what the difference was between what OpenAI signed with the DoD and what Anthropic said they wanted, and Sam more or less admits here the key point: OpenAI's deal requires them to trust the NSA. Anthropic's contract had real safeguards.
01.03.2026 04:38
We continue to build a team worthy of New Yorkers and the bold ambitions of the Mamdani administration. The latest addition is Diya Vij. Together we will make sure that art & artists are celebrated, culture and community are valued, and all of it creates more economic justice. Welcome aboard, Diya!
01.03.2026 00:21
Screenshot of New York Times article: “OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash”
Screenshot of text from article: “Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose, a term required by the Pentagon. But OpenAI also said it had found a way to ensure that its technologies would adhere to its safety principles by installing specific technical guardrails on its systems.”
My major takeaway from the last year of reporting on generative AI chatbots is that safety guardrails can fail when conversations run long and that everyone who works in this space knows that
www.nytimes.com/2026/02/27/t...
UN Secretary General Antonio Guterres: “Military action carries the risk of igniting a chain of events that no one can control in the most volatile region of the world.”
28.02.2026 21:44