New story out at Foom, where I've written about how researchers of military AI are increasingly shifting their focus to strategic impacts, like whether AI will lead to new wars being started.
www.foommagazine.org/militaries-a...
This is ridiculous. Really bad from @arstechnica.com. This cannot happen.
theshamblog.com/an-ai-agent-...
You are arguing that digital ads are good because they empower Google, which you claim is good: ".. enable a powerful technology to become a global utility." But Google is not a public good or a public utility. It is profit driven. You are conflating providing utility with being a public utility
05.02.2026 15:23
Strong disagree. You have to ask what ads actually are and whether they are good for society. Historically, when ads were sold by institutions like newspapers, they were part of an ecosystem that explicitly valued public service. Outside of such value systems, they are *not* good; self-evidently
05.02.2026 04:04
AGI is already here.
This is something I have felt for a while now; glad to see a more formal argument put forward. There are important deficits in AI, such as those described in arxiv.org/abs/2510.18212. But basically, it's here. And we need to deal with that.
www.nature.com/articles/d41...
In historical AI safety research, one of the 'grand catastrophic risks' that was always discussed was an intelligence explosion that wasn't controlled or regulated. Now, such research is increasingly pursued ... without a safety component.
www.foommagazine.org/is-research-...
Anytime I need a laugh, I go to Bari Weiss's comments
www.theguardian.com/media/2026/j...
Great to see our "From Language to Cognition" work featured in @mordecwhy.bsky.social's latest piece on language models and the brain. Glad to contribute to the conversation!
www.foommagazine.org/language-mod...
FOOM / NEW STORY OUT: "The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: That AI programs can show strong resemblances to large-scale brain regions."
www.foommagazine.org/language-mod...
It often feels like, in a mental health or depression context, whatever it is that is wrong with me is so deeply entrenched that neither I nor anyone else would ever be able to figure out what it is.
30.12.2025 17:34
In my latest for Foom, where I'm trying to provide free, high-quality, independent reporting on AI safety, I wanted to interview someone who could help me understand the challenging internal dynamics of the community. This was @ilex-ulmus.bsky.social.
www.foommagazine.org/the-moral-cr...
"When dealing with the Big Cats, who are literally killing machines, there is always a distinct energy or electricity when you are in their presence." -Leif Cocks.
Probably also the best description of humans
This kind of statement triggers me, and it's the reason why science journalists and neuroscientists need to speak out, loudly, about the analogies discovered between DNNs and cortexes. People need to know we are not just dealing with an MS Word-like technology here
www.theguardian.com/lifeandstyle...
Do it
13.12.2025 01:01
Absolutely right. This is as big a risk right now as anything else. It's flabbergasting that the current government thinks anyone is being fooled about this.
13.12.2025 01:00
New article covering recent findings from October: Models that maximize business performance in realistic role-play scenarios are also more likely to inflict harms.
www.foommagazine.org/leading-mode...
What does it mean when the study, the study's reviews, and all the other studies citing the study, which is actually a good and interesting study, all show clear signs of AI writing (without acknowledgement), lol
12.12.2025 02:37
Kinda fits with the picture that DNN models of brain regions, and high-capability DNN models generally, also typically require high-dimensional spaces, I guess? (shameless plug)
www.foommagazine.org/scientists-m...
Lol as long as it's cute we're good
12.12.2025 02:31
Good points. I think most intervention and pushback against the status quo here is probably good. I might nitpick that 'fully automating cancer cures' or 'accepting job displacement' could both be contested, the first as oversimplified, the second for reasons you mentioned. Complex topic!
10.12.2025 22:11
Is any alignment research valid if it does not engage with the fact that we are surrounded by highly misaligned technologies, in highly misaligned societies, created by highly misaligned individuals?
07.12.2025 23:44
Yes, BCIs seem to need stringent regulation more than perhaps any other technology ever invented. The regulation vacuum for AI does not inspire optimism. We are going to need serious public-interest advocacy from neuroscientists if this is going to be anything besides severely dystopian
04.12.2025 21:00
It's really strange to me how so many researchers in fields I follow have their names on 20-30 papers a year. I realize much of this comes down to co-authorship, but even at that level, it's a mind-boggling degree of networking or credit sharing. (Not questioning the work, just the social norms)
04.12.2025 18:43
2/ This week, we need your help!
Take action at NewsNotSlop.org and follow the steps below, because holding media companies accountable on AI is going to take all of us.
WSJ editorial board giving absolutely disgusting support for undemocratic regime change; cancel your subscriptions. "If Maduro refuses to leave, and Trump shrinks from acting to depose him, Trump and the credibility of the US will be the losers."
www.theguardian.com/world/2025/d...
What did it mean, in 2021, when researchers began to see strange shapes on the insides of their models? We now have a much more sophisticated understandingβmodels may be thinking in terms of shapes precisely because some concepts are intrinsically geometric.
www.foommagazine.org/scientists-m...
Interesting, I'll take a look! Mainly referring to issues from neuroAI, where neuroscientists have shown that we are kind of blindly sliding into the creation of technologies that are very seriously cortex-like or brain-like. (I self-published a 45-page journalism work about this.)
20.11.2025 20:57