It's time for the latest episode of "Everybody Hates Iran"...
A reminder that Saudi Arabia's military budget is nearly as high as Russia's (though they don't get as good of a buy for their money)
Issam al-Da'alis (2025, Palestine)*+
Ahmed al-Rahawi (2025, Yemen)*+
Ali Khamenei (2026, Iran)
+: Nuance to the degree of state-links of the killing
*: Nuance to whether the target leads a "state"
But apart from these (and others)...
Thomas Sankara (1987, Burkina Faso)+
René Moawad (1989, Lebanon)+
Rafic Hariri (2005, Lebanon)+
Ahmed Yassin (2004, Hamas/Palestine)*
Ismail Haniyeh (2024, Hamas/Palestine)*
Hassan Nasrallah (2024, Hezbollah/Lebanon)*
Yahya Sinwar (2024, Hamas/Palestine)*
Ngo Dinh Diem (1963, South Vietnam)
René Schneider (1970, Chile)+
Wasfi Tal (1971, Jordan)+
Salvador Allende (1973, Chile)
Ibrahim al-Hamdi (1977, North Yemen)+
Hafizullah Amin (1979, Afghanistan)
Omar Torrijos (1981, Panama)+
Bachir Gemayel (1982, Lebanon)+
Rashid Karami (1987, Lebanon)+
...
Zhang Zuolin (1928, China)
Armand Călinescu (1939, Romania)
José Abad Santos (1942, Philippines)
Hazza' al-Majali (1960, Jordan)
Patrice Lumumba (1961, Democratic Republic of the Congo)+
Rafael Trujillo (1961, Dominican Republic)
Sylvanus Olympio (1963, Togo)+
Abd al-Karim Qasim (1963, Iraq)+
...
100% on all five of those.
01.03.2026 01:14
I think it's an interception behind it.
01.03.2026 01:12
I read it as "Epic Furry"
28.02.2026 19:27
I'm not going to carry on this conversation any further.
To anyone reading this thread who actually wants to know how LLMs work, read the linked pages in the order provided
(I didn't include anything for attention, but just google Word2Vec & GloVe for a primitive version)
Without all of them, you do not have an LLM. The closest real-world thing to what you're describing is a vector database.
en.wikipedia.org/wiki/Vector_...
Your description of latent spaces was pretty good (apart from describing it as linguistic). But LLMs don't "work" on latent spaces. Latent spaces are one of three separate things that come together to enable Transformers, alongside attention and DNNs. You have to understand all of them.
28.02.2026 19:01
FFNs are what *do the work* of finding the next latent position, which is often quite different from the input hidden state they're given. (Add + norm, which we use to avoid the vanishing-gradient problem, helps carry forward the earlier hidden-state positions.)
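A toy, purely illustrative sketch of the sub-layer described in this post. All dimensions and weights here are invented (real FFN weights are learned, and real models use thousands of dimensions); the point is just the shape of the computation: the FFN proposes a move in latent space, and the residual add + norm carries the incoming hidden state forward.

```python
import math
import random

random.seed(0)
D_MODEL, D_FF = 4, 16  # invented toy sizes; real models: thousands

# Random stand-in weights (a trained model learns these)
W_in = [[random.gauss(0, 0.3) for _ in range(D_FF)] for _ in range(D_MODEL)]
W_out = [[random.gauss(0, 0.3) for _ in range(D_MODEL)] for _ in range(D_FF)]

def matvec(v, W):
    # v (len m) times W (m x n) -> vector of length n
    return [sum(v[i] * W[i][j] for i in range(len(v))) for j in range(len(W[0]))]

def layer_norm(x, eps=1e-5):
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    return [(xi - mean) / math.sqrt(var + eps) for xi in x]

def ffn_sublayer(h):
    hidden = [max(0.0, z) for z in matvec(h, W_in)]  # ReLU activations
    update = matvec(hidden, W_out)                   # the FFN's proposed move
    # Residual add + norm: output = norm(input + update), so the
    # earlier hidden state is carried forward, not replaced.
    return layer_norm([hi + ui for hi, ui in zip(h, update)])

h = [0.5, -1.0, 0.25, 0.0]  # one position's incoming hidden state
h_next = ffn_sublayer(h)
```

Note how `h_next` can land far from `h` in latent space, while the residual term still keeps `h`'s contribution in the sum.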
28.02.2026 19:01
You described LLMs solving problems by "whatever is near in the latent space". As if, when "mother" is encoded in a hidden state, the next word should be "mama", "mammy", or any other position right next to "mother". It's not some sort of random latent walk.
28.02.2026 19:01
Why does it matter that you leave out the largest part of how LLMs work?
Really?
The fact that what you're describing isn't an LLM at all? You're describing taking latent spaces and then just jumping to nearby latent positions, as if the FFNs don't exist at all, when in reality they make up the vast majority of the parameters of any model.
The next latent comes from DNNs.
(And the other bots aren't shown later because they're never in frame)
I think there is a bit of an assumption that you're familiar with FIRST. :)
The first frame with the blueprint is a diagram of their robot. Their robot has two parts which immediately disconnect (frame 3).
If you mean frame 2, FIRST competitions can take different forms - here was the last competition before this was drawn:
www.youtube.com/watch?v=bDo5...
It's not merely "fact lookup", it's "(fuzzy) logical inference performance". Facts are indeed stored in the FFNs, but so are the conditions under which those facts follow from the input. And the logical rules on which deductions are made each layer are mind-bogglingly complex.
28.02.2026 17:59
Which in turn are built off DNNs (Transformer FFNs are DNNs), so to understand how *they* learn and perform fuzzy logical deductions, I'd recommend:
www.youtube.com/watch?v=0Qcz...
To understand how circuits are built up from the base, I recommend:
distill.pub/2020/circuit...
Then understanding that each of these very high-level circuits is built on simpler circuits, iteratively down with each layer. E.g.:
transformer-circuits.pub/2024/scaling...
To understand LLMs, I recommend first starting here for the highest-level view:
transformer-circuits.pub/2025/attribu...
I wouldn't share that thread, IMHO.
bsky.app/profile/nafn...
LLMs are not "a hyperdimensional map of language use". *Latents* are a hyperdimensional map of *concepts*. This map is generated from (among other things, but usually no longer exclusively) language, but there's nothing linguistic about it.
But again, LLMs are not merely latents.
FFNs don't encode "whatever is nearest" into the subsequent latent; they function as detector-generators: *detecting* combinations of concepts in the latent, and then *generating* whatever concepts are the logical deductions from the combinations detected.
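A hand-built toy (not a real model) of the detector-generator view from this post: a single FFN "neuron" whose input pattern *detects* a combination of concept directions, and whose output *writes* a deduced concept back into the latent. The concept axes and vectors are all invented for illustration.

```python
# Invented latent axes: [royal, female, male, plural]
CONCEPTS = {
    "king":  [1.0, 0.0, 1.0, 0.0],
    "queen": [1.0, 1.0, 0.0, 0.0],
    "woman": [0.0, 1.0, 0.0, 0.0],
    "man":   [0.0, 0.0, 1.0, 0.0],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ffn_neuron(latent):
    # Detector: this neuron's input weights fire when "royal" AND
    # "female" are both present (and "male" is absent)...
    key = [1.0, 1.0, -1.0, 0.0]
    activation = max(0.0, dot(latent, key) - 1.0)  # ReLU with a threshold
    # Generator: ...and its output weights write a "queen" direction
    # into the next latent, scaled by how strongly it fired.
    value = CONCEPTS["queen"]
    return [activation * v for v in value]

ffn_neuron(CONCEPTS["queen"])  # fires: royal + female detected
ffn_neuron(CONCEPTS["man"])    # silent: pattern not matched
```

A real FFN layer sums thousands of such detector-generator units per layer, which is why it isn't a "nearest neighbour" jump: the output direction is whatever the fired detectors collectively generate.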
28.02.2026 17:50
This is not how LLMs work. You seem to understand latent spaces, but you're forgetting about the FFNs. LLMs are not simply latent spaces; latent spaces just hold the conceptual representation that the FFNs work on.
28.02.2026 17:48
It's-a-me! Dario! I'm 'a gonna be 'a responsible with'a me'a models!
28.02.2026 01:28
Maybe we should ask his sister about his character...
28.02.2026 01:15
Btw, if you are following the Anthropic debacle, and Minnesota is fresh in your mind, consider the limits your government was not willing to budge on, according to the Anthropic CEO: www.anthropic.com/news/stateme...
28.02.2026 00:26