ICYMI: Cloud inquiry chair quits UK competition watchdog over glacial pace of reform
05.03.2026 09:37 — 👍 6 🔁 7 💬 0 📌 0
In 2024 the San Francisco-based Anthropic deployed its model across the US Department of War and other national security agencies to speed up war planning. Claude became part of a system developed by the war-tech company Palantir with the Pentagon to “dramatically improve intelligence analysis and enable officials in their decision-making processes”.
“The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought,” said Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains. “So you’ve got scale and you’ve got speed, you’re [carrying out the] assassination-style strikes at the same time as you’re decapitating the regime’s ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you’re doing everything at once.”
The latest AI systems can rapidly analyse mountains of information on potential targets, from drone footage to telecommunications interceptions as well as human intelligence. Palantir’s system uses machine learning to identify and prioritise targets and recommend weaponry, accounting for stockpiles and previous performance against similar targets. It also uses automated reasoning to evaluate the legal grounds for a strike.
“This is the next era of military strategy and military technology,” said David Leslie, professor of ethics, technology and society at Queen Mary University of London, who has observed demonstrations of AI military systems. He also warned that reliance on AI can result in “cognitive off-loading”: humans tasked with making a strike decision can feel detached from its consequences because the effort of thinking it through has been made by a machine.
On Saturday 165 people, many of them children, were killed in a missile strike that hit a school in southern Iran, according to state media. The school appeared to be close to a military barracks, and the UN called the strike “a grave violation of humanitarian law”. The US military has said it is looking into the reports.
In the days before the Iran strikes, the US administration had said it would banish Anthropic from its systems after it refused to allow its AI to be used for fully autonomous weapons or surveillance of US citizens. But it remains in use until it is phased out. Anthropic’s rival, OpenAI, quickly signed its own deal with the Pentagon for military use of its models.
“The advantage is in the speed of decision-making, the collapsing of planning from what might have taken days or weeks before to minutes or seconds,” said Leslie. “These systems produce a set of options for human decision makers but [they’ve] got a much narrower time band … to evaluate the recommendation.”
“The deployment of AI is expanding,” said Prerana Joshi, research fellow at the Royal United Services Institute, a defence thinktank. “It is being done across countries’ defence estates … across logistics, training, decision management, maintenance.” She added: “AI is a technology that will allow decision makers, and anyone in that chain, to improve the productivity and efficiency of what they do. It’s a way of synthesising data at a much faster pace that is helpful to decision makers.”
This article and the academics quoted are a stunning illustration of how both media and academia have fundamentally failed to recognise how a random number generator is being used to widen the already-fucking-wide permission space for mass murder
Both now helping that project
archive.ph/wip/RlMO5
🚨 It looks like the UK government is gearing up to upend copyright law in favour of AI companies, legalising the theft of their work.
This is despite creatives' huge protests, and despite previous proposals being roundly rejected by the public.
Please spread the word.
🧵 1/4
Full answer at 13:15 -->>
"It's data centers and it's nothing but data centers. And the people who start trying to muddy the waters with EVs and electrification, that's not true. Absolutely not true. There's no data to support it. It's not true. Our data shows that it's data centers"
US Border Patrol admits using Real-Time Bidding (RTB) data to track people's movements.
The failure to enforce against the RTB data breach at the heart of online advertising is very, very dangerous.
Big scoop by @josephcox.bsky.social @404media.co!
www.404media.co/cbp-tapped-i...
It's a simple way of putting it, but I hope the fact that one group of data centres powering one shitty, evil chatbot for one crappy little social media site undoes most or all of Tesla's entire global climate benefits puts into perspective how WILDLY bloated genAI is as software
02.03.2026 19:56 — 👍 117 🔁 49 💬 4 📌 3
www.ukri.org/opportunity/...
Just in time for the burst bubble and next AI winter?
Take care not to become a parasite; do not lazily appropriate the results of other people’s labour, but learn and labour truly to get your own living. Take care that everything you possess, whether physical, mental, or spiritual, shall be the result of your own toil as well as other people’s; and remember that you are bound to pay, in some shape or way, everyone who helps you.
📚
And some more advice from Mary E. Boole (1909, p. 33) which oddly reads as highly relevant for AI today … 👀
21/🧵
Rapid AI-driven development makes security unattainable, warns Veracode
26.02.2026 15:31 — 👍 4 🔁 2 💬 0 📌 0
Anyone still using ChatGPT supports fascism
quitgpt.org
AIs can’t stop recommending nuclear strikes in war game simulations
Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
www.newscientist.com/article/2516...
The creator of the AI agent “Einstein” wants to free humans from the burden of academic labor. Critics say that misses the point of education entirely.
25.02.2026 14:48 — 👍 582 🔁 132 💬 80 📌 257
“As tech giants force-feed AI inevitability marketing and trumpet the technology’s supposed benefits to society, a robust AI-resistance movement has emerged in response.”
25.02.2026 01:45 — 👍 133 🔁 57 💬 4 📌 7
AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says https://gizmodo.com/ai-added-basically-zero-to-us-economic-growth-last-year-goldman-sachs-says-2000725380
23.02.2026 17:45 — 👍 2324 🔁 917 💬 89 📌 441
Shoshana Zuboff’s new film argues for the abolition of the business model behind social media.
observer.co.uk/culture/inte...
Irish Data Protection Commission was asked today in Committee: have you ever taken a GDPR decision on Google?
Answer... No.
Ireland is responsible for supervising Google's data use across the whole EU. It has produced no decisions.
Kudos @sineadgibney.bsky.social for asking the question.
1. Kevin Gross and I have a new paper out today in PLOS Biology.
We used economic models based around screening games and the market for unpaid labor to highlight a meltdown cycle threatening peer review.
This is a mental war that we have to win. It's not about being a luddite, it's about destroying or retaining the ability to think. It's about how insidious this becomes in a situation where generative AI is adopted and then subverted intentionally.
www.thealgorithmicbridge.com/p/a-new-whar...
DIMON, on AI.
(via @yahoofinance.com)
The creator of Nearby Glasses made the app after reading 404 Media's coverage of how people are using Meta's Ray-Ban smart glasses to film people without their knowledge or consent. “I consider it to be a tiny part of resistance against surveillance tech.”
24.02.2026 16:15 — 👍 1096 🔁 503 💬 17 📌 35
when people talk about AI used in moderation of social media posts, you must remember there is always the human-in-the-loop, often a woman, often in the Global South, always dehumanised and forgotten
1/n
www.theguardian.com/global-devel...
A ‘bot-lash’ is brewing, with various popular movements demanding accountability from AI companies over their political, environmental, social and economic effects.
Efforts are currently disjointed, but they span the political spectrum and could grow into a significant force ↘️
giftarticle.ft.com/giftarticle/...
New papers from me and @carefultrouble.bsky.social today on the safe adoption of AI - exploring what "safe adoption" means and whether it's possible. We analysed more than 300 sources on AI safety in critical systems, outcomes for workers, and environmental impacts and found 5 common barriers
24.02.2026 11:59 — 👍 28 🔁 15 💬 2 📌 0
surely we can teach the children how to ethically use this product designed for unethical use by unethical people
23.02.2026 15:15 — 👍 1176 🔁 139 💬 4 📌 10
NEW: Meta’s director of AI safety, supposedly the person at the company who is working to make sure that powerful AI tools don’t go rogue and act against human interests, had to scramble to stop an AI agent from deleting her inbox against her wishes...
23.02.2026 15:23 — 👍 1851 🔁 705 💬 63 📌 284
I missed the bluesky vs X discourse but as always you can start every one of these debates with a simple question!!!!!
- Does social media website come packaged with a software system designed to freely generate child abuse materials on request
🔳 Yes
🔳 No
One of the greatest bollard walks in history.
#WorldBollardAssociation
the headline "big tech says generative AI will save the planet. it doesn't offer much proof" and then the photo of andy windsor lookin' freaked out in the back seat of a cop car
Our shocking new findings are out now!
ketanjoshi.co/2026/02/17/b...