
brasidas

@brasidas.bsky.social

Not Laconic

6,209 Followers  |  2,140 Following  |  7,655 Posts  |  Joined: 05.06.2023

Latest posts by brasidas.bsky.social on Bluesky

I just wish weird psychosexual obsessions functioned the way they used to. 19th century weirdos aiming to end masturbation made us both a breakfast cereal and a tasty cracker. That NEVER happens today. Nobody tries to criminalize IUDs and accidentally invents a new kind of Milano.

03.08.2025 03:09 — 👍 2461    🔁 481    💬 45    📌 28

Grant's Memoirs is perhaps one of the best military books of all time.

If you like seeing the battlefield through the eyes of a genius.

04.08.2025 00:33 — 👍 7    🔁 0    💬 0    📌 0

Or your previous book about Boyd.

I get the impression it's that Boyd wasn't an academic...

04.08.2025 00:25 — 👍 2    🔁 0    💬 1    📌 0

Well, in a few months they can read my next book on Boyd and see all the things Boyd actually said about all the things.

03.08.2025 23:55 — 👍 6    🔁 1    💬 1    📌 0

I'm really fine with saying that you have to accept some definitions and Clausewitz is a great start -- but also invoking Clausewitz in a paper about AI should raise a couple flags and be buttressed just a smidge...

03.08.2025 23:07 — 👍 1    🔁 0    💬 0    📌 0

I think there are still prior questions.

"What, exactly, is command?"

And "why does war demand it?"

Luckily, this is what I'm working on for my diss.

03.08.2025 22:55 — 👍 20    🔁 2    💬 2    📌 0

It's a great topic!

03.08.2025 23:06 — 👍 1    🔁 0    💬 1    📌 0

So AlphaZero exploited weaknesses when it absolutely demolished Stockfish.

But most chess engines are so incredibly strong that it is a weird conversation to have.

03.08.2025 22:48 — 👍 0    🔁 0    💬 1    📌 0

So there's a reference architecture and everything...people choose to believe that CJADC2 is whatever they want it to be.

03.08.2025 22:29 — 👍 1    🔁 0    💬 1    📌 0

How would you use modeling and simulation, wargaming, and AI to do better than the Royal Navy 125 years ago?

03.08.2025 22:18 — 👍 11    🔁 2    💬 0    📌 0

Right now, we (and just about every navy in the world) are preparing for a naval battle that has never happened - long-range, missile-based naval combat - with the Falklands as the closest example.

03.08.2025 22:18 — 👍 19    🔁 1    💬 2    📌 1

I think about Jutland a lot: the Royal Navy created a force for dreadnought battle -- a battle that had really never taken place -- and optimized their ships, tactics, and C2 systems for things that turned out not to matter. Worse, these TTPs were harmful.

03.08.2025 22:14 — 👍 15    🔁 1    💬 1    📌 0

Yeah, they think that AI, in picking a statistically likely answer, is doing inductive reasoning.

AI wishes it were doing inductive reasoning...

03.08.2025 22:11 — 👍 1    🔁 0    💬 0    📌 0
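A minimal sketch of what "picking a statistically likely answer" looks like in practice -- not any particular model's code, just sampling a next token from a made-up probability distribution (the vocabulary, probabilities, and function name here are invented for illustration):

```python
import random

# Hypothetical next-token distribution a language model might assign after a
# prompt; the vocabulary and probabilities are invented for illustration.
next_token_probs = {
    "withdraw": 0.41,
    "attack": 0.33,
    "regroup": 0.18,
    "surrender": 0.08,
}

def pick_next_token(probs, temperature=1.0):
    """Sample one token, with temperature scaling of the distribution."""
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for tok, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return tok
    return tok  # floating-point fallback

print(pick_next_token(next_token_probs))                # a statistically likely pick
print(max(next_token_probs, key=next_token_probs.get))  # the single most likely pick
```

Neither the sampled pick nor the greedy pick involves forming and testing a hypothesis about the world, which is roughly the gap the post above is pointing at.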

This is all a sort of false dilemma. The real question is "what sorts of AI tools are useful for command and control?"

03.08.2025 22:10 — 👍 17    🔁 0    💬 1    📌 0

The second issue I would raise is the idea that AI will replace command - I get that DARPA is investing in frameworks for this, but DARPA invests in a whole lot of wild things. CJADC2 is also not about replacing command with AI.

03.08.2025 22:10 — 👍 8    🔁 0    💬 2    📌 0

The real problem I see is that reinforcement learning is almost provably the wrong approach for a domain like this -- but that's not unique to AI. People probably learn worse!

03.08.2025 22:05 — 👍 13    🔁 0    💬 1    📌 0

If genius is the benchmark here, when history is replete with mediocrity and incompetence, and genius is probably more a reflection of luck, like, what are we even doing here?

03.08.2025 22:04 — 👍 13    🔁 0    💬 1    📌 0

War is a chaotic endeavor with extreme nonlinearity and kurtosis - there is no tight, game theoretic framework for success in war.

03.08.2025 21:59 — 👍 25    🔁 1    💬 1    📌 1
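For readers who have not met "kurtosis" outside a stats class, a toy numerical contrast of thin-tailed versus fat-tailed outcomes (assuming numpy and scipy are available; the distributions and sample size are arbitrary choices, not anything from the thread):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

normal_sample = rng.normal(size=n)            # thin tails
heavy_sample = rng.standard_t(df=3, size=n)   # Student's t with 3 dof: fat tails

# Excess kurtosis: roughly 0 for the normal, large and unstable for fat tails.
print("normal excess kurtosis:", stats.kurtosis(normal_sample))
print("t(3)   excess kurtosis:", stats.kurtosis(heavy_sample))

# The practical meaning: a handful of extreme draws dominate the sample.
print("normal max |x|:", np.abs(normal_sample).max())
print("t(3)   max |x|:", np.abs(heavy_sample).max())
```

In the fat-tailed case the summary statistics are dominated by a few extreme observations, which is the regime where averages and tidy equilibrium reasoning tell you the least.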

The first issue I would raise is that there is no real anthropology of command here - @ldfreedman.bsky.social and Clausewitz are introduced as reference points, which is great, but the question I would ask is, "are humans actually good at command?"

03.08.2025 21:57 — 👍 15    🔁 0    💬 1    📌 1

And I absolutely agree with the premise here - that AI makes different decisions than those needed in wartime command. But these arguments are so bad, and so uninformed about the technology they are arguing against, that you have to ask: why bother?

03.08.2025 21:56 — 👍 12    🔁 1    💬 1    📌 0
AI success at isolated tactical interactions is taken to be evidence of greater things to come because of how AI optimists imagine tactical successes contributing to strategic goals. Here, the questionable intellectual legacies of John Boyd, John Warden, and Arthur Cebrowski hang over the arguments of AI optimists.45 Notions of Observe, Orient, Decide, Act (OODA) loops, hierarchies of centres of gravity, and the war-winning effects of precision strike complexes are the intellectual primordial soup of AI optimists' visions of war and strategy. The AI optimist literature not only resembles the problems of the RMA debates and OODA loop concepts, but also what Michael Handel called the 'tacticisation of strategy',46 where lower-level operational considerations start driving strategic objectives, where the military means excessively influence the political ends of ...

Also, they hate Boyd for some unexplained reason, and I'd love them to bother to explain it:

03.08.2025 21:53 — 👍 16    🔁 0    💬 4    📌 0

Then there is the appeal to authority fallacy with Clausewitz - War Studies people invoke Clausewitz the way that medieval Scholastics used Aristotle.

03.08.2025 21:49 — 👍 18    🔁 0    💬 2    📌 0

Deep Blue did not use "brute force" in beating Garry Kasparov; the descriptions of AlphaGo and Watson are also superficial; and I spent far too much time on the DARPA paper on the Battle of 73 Easting...

03.08.2025 21:47 — 👍 20    🔁 1    💬 2    📌 0
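On the "brute force" point: a generic minimax-with-alpha-beta sketch -- emphatically not Deep Blue's actual code; the toy game tree and scores are invented -- showing that classical engines prune lines and lean on heuristic evaluation rather than enumerating everything:

```python
def alpha_beta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that cannot
    change the final decision (alpha-beta pruning)."""
    if depth == 0 or not node.get("children"):
        return node["eval"]  # heuristic evaluation at the leaves
    if maximizing:
        value = float("-inf")
        for child in node["children"]:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent would never allow this line
        return value
    else:
        value = float("inf")
        for child in node["children"]:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Tiny invented game tree: leaves carry heuristic scores.
tree = {
    "children": [
        {"children": [{"eval": 3}, {"eval": 12}]},
        {"children": [{"eval": 2}, {"eval": 4}]},
    ]
}
print(alpha_beta(tree, depth=2, alpha=float("-inf"), beta=float("inf"),
                 maximizing=True))  # -> 3
```

The cutoffs are the point: most of the tree is never examined at all, and what is examined gets scored by a hand-tuned evaluation, which is why "brute force" undersells engines like Deep Blue and Stockfish.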

The authors get very into the weeds on AI across a bunch of epochs of AI technology in a way that lacks technical nuance:

03.08.2025 21:46 — 👍 12    🔁 0    💬 1    📌 0

And I don't think that's necessarily the case. In many respects, the most hysterical AI fails involve overly glib abductive reasoning.

03.08.2025 21:45 — 👍 17    🔁 1    💬 2    📌 0

The article makes a point about the difference between abductive and deductive reasoning -- which is genuinely interesting.

However, it claims -- dogmatically -- that AI can only do the latter while command requires the former.

03.08.2025 21:40 — 👍 20    🔁 1    💬 3    📌 0
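A toy contrast between the two modes of reasoning the article leans on -- the scenario, rules, and function names below are invented purely for illustration:

```python
# Deduction: conclusions guaranteed by premises plus rules.
# Abduction: the most plausible explanation of an observation, which may be wrong.

RULES = {"aircraft": {"fast", "high"}, "ship": {"slow", "surface"}}

def deduce(premises: set[str]) -> set[str]:
    """Return every classification fully entailed by the premises."""
    return {kind for kind, features in RULES.items() if features <= premises}

def abduce(observation: set[str]) -> str:
    """Return the hypothesis that best explains the observation."""
    return max(RULES, key=lambda kind: len(RULES[kind] & observation))

print(deduce({"fast", "high", "night"}))  # {'aircraft'}  (follows necessarily)
print(abduce({"fast"}))                   # 'aircraft'    (best guess, could be wrong)
```

Deduction only returns what the premises already guarantee; abduction jumps to the most plausible explanation and can be wrong, which is the kind of leap the article says command requires.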
Preview
We'll never have a model of an AI major-general: Artificial Intelligence, command decisions, and kitsch visions of war
Military AI optimists predict future AI assisting or making command decisions. We instead argue that, at a fundamental level, these predictions are dangerously wrong. The nature of war demands deci...

I finally got around to reading this article that was posted on here a bit ago - and while I agree with the sentiments, the actual structure of the argument was so bad that I ended up wanting to replace all of our GOs with AI by the end of it:

03.08.2025 21:38 — 👍 35    🔁 3    💬 1    📌 0
Post image Post image

I don't know what changed, but this is what it feels like to use this website now

03.08.2025 17:14 — 👍 4866    🔁 540    💬 27    📌 0

this is not true. many, many people are good thinkers without being good writers. people who *are* good writers tend to vastly overestimate how much of a component of intelligence it is. but I have edited many intelligent and thoughtful people who were at best average-to-mediocre writers ...

03.08.2025 18:11 — 👍 435    🔁 45    💬 42    📌 36
Post image

03.08.2025 14:11 — 👍 172    🔁 19    💬 1    📌 1
