Matthew Chalmers

@matthewchalmers.bsky.social

Computer scientist into Ubicomp, HCI, theory and (a long time ago) data visualisation. Also kind of keen on mountain things, fine food things, and fine food in the mountains.

205 Followers  |  56 Following  |  214 Posts  |  Joined: 09.11.2024

Posts by Matthew Chalmers (@matthewchalmers.bsky.social)

Cloud inquiry chair quits UK competition watchdog over glacial pace of reform Kip Meeks walked a year early with the overseer of tech markets yet to take action against AWS and Microsoft The chair of the Competition and Markets Authority's cloud inquiry has quit, citing the slow pace of implementing recommendations outlined in a report it published in 2025 to boost competition in Britain's cloud computing market.…

ICYMI: Cloud inquiry chair quits UK competition watchdog over glacial pace of reform

05.03.2026 09:37 — 👍 6    🔁 7    💬 0    📌 0
In 2024 the San Francisco-based Anthropic deployed its model across the US Department of War and other national security agencies to speed up war planning. Claude became part of a system developed by the war-tech company Palantir with the Pentagon to “dramatically improve intelligence analysis and enable officials in their decision-making processes”.

“The AI machine is making recommendations for what to target, which is actually much quicker in some ways than the speed of thought,” said Craig Jones, a senior lecturer in political geography at Newcastle University and an expert in kill chains. “So you’ve got scale and you’ve got speed, you’re [carrying out the] assassination-style strikes at the same time as you’re decapitating the regime’s ability to respond with all the aerial ballistic missiles. That might have taken days or weeks in historic wars. [Now] you’re doing everything at once.”

The latest AI systems can rapidly analyse mountains of information on potential targets from drone footage to telecommunications interceptions as well as human intelligence. Palantir’s system uses machine learning to identify and prioritise targets and recommend weaponry, accounting for stockpiles and previous performance against similar targets. It also uses automated reasoning to evaluate legal grounds for a strike.

“This is the next era of military strategy and military technology,” said David Leslie, professor of ethics, technology and society at Queen Mary University of London, who has observed demonstrations of AI military systems. He also warned that reliance on AI can result in “cognitive off-loading”. Humans tasked with making a strike decision can feel detached from its consequences because the effort to think it through has been made by a machine.

On Saturday 165 people, many of them children, were killed in a missile strike that hit a school in southern Iran, according to state media. It appeared to be close to a military barracks and the UN called it “a grave violation of humanitarian law”. The US military has said it is looking into the reports.

In the days before the Iran strikes, the US administration had said it would banish Anthropic from its systems after it refused to allow its AI to be used for fully autonomous weapons or surveillance of US citizens. But it remains in use until it is phased out. Anthropic’s rival, OpenAI, quickly signed its own deal with the Pentagon for military use of its models.

“The advantage is in the speed of decision-making, the collapsing of planning from what might have taken days or weeks before to minutes or seconds,” said Leslie. “These systems produce a set of options for human decision makers but [they’ve] got a much narrower time band … to evaluate the recommendation.”

“The deployment of AI is expanding,” said Prerana Joshi, research fellow at the Royal United Services Institute, a defence thinktank. “It is being done across countries’ defence estates … across logistics, training, decision management, maintenance.”

She added: “AI is a technology that will allow decision makers, and anyone in that chain, to improve the productivity and efficiency of what they do. It’s a way of synthesising data at a much faster pace that is helpful to decision makers.”

This article and the academics quoted are a stunning illustration of how both media and academia have fundamentally failed to recognise how a random number generator is being used to widen the already-fucking-wide permission space for mass murder

Both now helping that project

archive.ph/wip/RlMO5

03.03.2026 10:30 — 👍 113    🔁 46    💬 3    📌 3
🚨 It looks like the UK government is gearing up to upend copyright law in favour of AI companies, legalising the theft of creators' work.

This is despite creatives' huge protests, and despite previous proposals being roundly rejected by the public.

Please spread the word.

🧵 1/4

02.03.2026 15:43 — 👍 2884    🔁 2417    💬 92    📌 467

Full answer at 13:15 -->>

"It's data centers and it's nothing but data centers. And the people who start trying to muddy the waters with EVs and electrification, that's not true. Absolutely not true. There's no data to support it. It's not true. Our data shows that it's data centers"

03.03.2026 13:40 — 👍 82    🔁 33    💬 3    📌 2
US Border Patrol admits using Real-Time Bidding (RTB) data to track people's movements.

The failure to enforce against the RTB data breach at the heart of online advertising is very, very dangerous.

Big scoop by @josephcox.bsky.social @404media.co!

www.404media.co/cbp-tapped-i...

03.03.2026 14:50 — 👍 27    🔁 22    💬 1    📌 3
Musk’s fossil data centres are undoing Tesla’s climate benefit I know you already know the data centres built to power the generative AI software running on X are intensely harmful and wildly polluting, often in breach of rules and regulations. You may not kno…

It's a simple way of putting it, but I hope the fact that one group of data centres powering one shitty, evil chatbot for one crappy little social media site is undoing most or all of Tesla's entire global climate benefit puts in perspective how WILDLY bloated genAI is as software

02.03.2026 19:56 — 👍 117    🔁 49    💬 4    📌 3
I checked out one of the biggest anti-AI protests ever Hundreds joined the march in London’s AI hub to warn against the harms that artificial intelligence could bring. I went along to see what they had to say.

www.technologyreview.com/2026/03/02/1...

02.03.2026 15:08 — 👍 0    🔁 0    💬 0    📌 0
Fundamental AI Research Lab Apply to lead the formation of a strategic research lab dedicated to advancing the UK’s position in fundamental artificial intelligence (AI) development. Up to £40 million delivered in stages is avail...

www.ukri.org/opportunity/...

Just in time for the burst bubble and next AI winter?

02.03.2026 10:23 — 👍 0    🔁 0    💬 0    📌 0
Take care not to become a parasite; do not lazily appropriate the results of other people’s labour, but learn and labour truly to get your own living. Take care that everything you possess, whether physical, mental, or spiritual, shall be the result of your own toil as well as other people’s; and remember that you are bound to pay, in some shape or way, everyone who helps you.

📚

And some more advice from Mary E. Boole (1909, p. 33) which oddly reads as highly relevant for AI today … 👀

21/🧵

28.02.2026 23:32 — 👍 41    🔁 9    💬 1    📌 0
Rapid AI-driven development makes security unattainable, warns Veracode Report claims more vulnerabilities created than fixed as remediation gap widens Veracode has posted its annual State of Software Security report, based on data from 1.6 million applications tested on its cloud platform, finding that more vulnerabilities are being created than are being fixed, and that high-velocity development with AI is making comprehensive security unattainable.…

26.02.2026 15:31 — 👍 4    🔁 2    💬 0    📌 0
QuitGPT — OpenAI Execs are Trump's Biggest Donors Join the movement. Delete ChatGPT. Cancel your subscription. It's time to quit.

Anyone still using ChatGPT supports fascism

quitgpt.org

25.02.2026 22:28 — 👍 169    🔁 77    💬 2    📌 4
AIs can’t stop recommending nuclear strikes in war game simulations Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

www.newscientist.com/article/2516...

25.02.2026 12:27 — 👍 3051    🔁 1298    💬 392    📌 1480
What’s the Point of School When AI Can Do Your Homework? The creator of the AI agent “Einstein” wants to free humans from the burden of academic labor. Critics say that misses the point of education entirely.

25.02.2026 14:48 — 👍 582    🔁 132    💬 80    📌 257
A burgeoning AI boycott plasters the NYC subway New Yorkers began vandalizing ads for dystopian AI products bombarding the city last fall, sparking a larger movement of AI rejection

“As tech giants force-feed AI inevitability marketing and trumpet the technology’s supposed benefits to society, a robust AI-resistance movement has emerged in response.”

25.02.2026 01:45 — 👍 133    🔁 57    💬 4    📌 7
AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says Imported chips and hardware mean the AI investments aren't translating into US GDP growth.

AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says https://gizmodo.com/ai-added-basically-zero-to-us-economic-growth-last-year-goldman-sachs-says-2000725380

23.02.2026 17:45 — 👍 2324    🔁 917    💬 89    📌 441
Shoshana Zuboff takes on the tech bros | The Observer The Harvard professor believes that Silicon Valley’s business model must be outlawed. Her new film about the tragic case of Molly Russell explains why

Shoshana Zuboff’s new film argues for the abolition of the business model behind social media.

observer.co.uk/culture/inte...

25.02.2026 08:23 — 👍 0    🔁 0    💬 0    📌 0
Irish Data Protection Commission was asked today in Committee: have you ever taken a GDPR decision on Google?

Answer... No.

Ireland is responsible for supervising Google's data use across the whole EU. It has produced no decisions.

Kudos @sineadgibney.bsky.social for asking the question.

24.02.2026 17:31 — 👍 119    🔁 71    💬 9    📌 5
Screening, sorting, and the feedback cycles that imperil peer review The process of peer review is vital to contemporary science, but is also under enormous strain. This study uses mathematical models to dissect the threats to the long-term viability of peer review, su...

1. Kevin Gross and I have a new paper out today in PLOS Biology.

We used economic models based around screening games and the market for unpaid labor to highlight a meltdown cycle threatening peer review.

24.02.2026 20:54 — 👍 324    🔁 132    💬 8    📌 17
A New Wharton Study on AI Warns of a Growing Problem: Cognitive Surrender Casual users should pay special attention

This is a mental war that we have to win. It's not about being a luddite; it's about whether we destroy or retain the ability to think. It's about how insidious this becomes in a situation where generative AI is adopted and then subverted intentionally.

www.thealgorithmicbridge.com/p/a-new-whar...

24.02.2026 20:28 — 👍 674    🔁 239    💬 12    📌 25
DIMON, on AI.

(via @yahoofinance.com)

24.02.2026 15:40 — 👍 374    🔁 85    💬 46    📌 22
This App Warns You if Someone Is Wearing Smart Glasses Nearby

The creator of Nearby Glasses made the app after reading 404 Media's coverage of how people are using Meta's Ray-Bans smartglasses to film people without their knowledge or consent. “I consider it to be a tiny part of resistance against surveillance tech.”

24.02.2026 16:15 — 👍 1096    🔁 503    💬 17    📌 35
‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI Women in rural communities describe trauma of moderating violent and pornographic content for global tech companies

When people talk about AI being used in moderation of social media posts, you must remember there is always a human in the loop: often a woman, often in the Global South, always dehumanised and forgotten

1/n

www.theguardian.com/global-devel...

24.02.2026 16:33 — 👍 549    🔁 350    💬 3    📌 23
The ‘botlash’ movement is gaining momentum Protesting and unsubscribing are being promoted as forms of modern-day strikes

A ‘bot-lash’ is brewing, with various popular movements demanding accountability from AI companies for their political, environmental, social and economic effects.
Efforts are currently disjointed, but they span the political spectrum and could grow into a significant force ↘️

giftarticle.ft.com/giftarticle/...

24.02.2026 08:49 — 👍 31    🔁 14    💬 0    📌 5

New papers from me and @carefultrouble.bsky.social today on the safe adoption of AI - exploring what "safe adoption" means and whether it's possible. We analysed more than 300 sources on AI safety in critical systems, outcomes for workers, and environmental impacts, and found 5 common barriers

24.02.2026 11:59 — 👍 28    🔁 15    💬 2    📌 0

surely we can teach the children how to ethically use this product designed for unethical use by unethical people

23.02.2026 15:15 — 👍 1176    🔁 139    💬 4    📌 10
Einstein - AI Homework Agent Einstein logs into Canvas and does your homework automatically. He has his own computer — he can watch lectures, read essays, write papers, and participate in discussions.

Is this bad

23.02.2026 15:06 — 👍 1414    🔁 331    💬 159    📌 615
Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox Meta Superintelligence Labs’ director of alignment called it a “rookie mistake.”

NEW: Meta’s director of AI safety, supposedly the person at the company who is working to make sure that powerful AI tools don’t go rogue and act against human interests, had to scramble to stop an AI agent from deleting her inbox against her wishes...

23.02.2026 15:23 — 👍 1851    🔁 705    💬 63    📌 284

I missed the bluesky vs X discourse but as always you can start every one of these debates with a simple question!!!!!

- Does the social media website come packaged with a software system designed to freely generate child abuse materials on request?

🔳 Yes
🔳 No

23.02.2026 09:10 — 👍 399    🔁 84    💬 10    📌 2
One of the greatest bollard walks in history.
#WorldBollardAssociation

22.02.2026 13:23 — 👍 5556    🔁 1247    💬 71    📌 109
the headline "big tech says generative AI will save the planet. it doesn't offer much proof" and then the photo of andy windsor looking freaked out in the back seat of a cop car

Our shocking new findings are out now!

ketanjoshi.co/2026/02/17/b...

22.02.2026 07:30 — 👍 35    🔁 2    💬 3    📌 1