@nico-encounter.bsky.social
Writer interested in theory of the state, political economy, AI and semiotics. Substack: https://nicolasdvillarreal.substack.com/
Watching a production of swan lake and I'm beginning to understand why the 19th century European bourgeoisie was Like That.
10.12.2025 00:33

Praying, aka breaking the fourth wall
09.12.2025 14:10

Someone should have told them about semiotic fields.
09.12.2025 03:28

Conrad Hamilton responds to Nicolas D Villarreal's recent review of Flowers For Marx, taking issue with his critique of the collection's contents.
06.12.2025 17:37

Here's a great review of what we saw in AI this year, from @gleech.org
08.12.2025 17:24

Mussolini Son of the Century is some of the best TV I've seen in a minute
08.12.2025 03:47

Getting close to the end of Paradise Lost. Milton is certainly the equal of Ovid and Homer; however, as a matter of faith there is a certain impiety to narrating God and Jesus as characters. Like, in a Jesus Christ Superstar sort of way.
07.12.2025 06:33

i don't understand at all those who allege intent as something prior to, or otherwise more fundamental to, semantics.
07.12.2025 00:27

Wow I really am a millennial after all huh
04.12.2025 00:12

Polymarket @Polymarket: BREAKING: OpenAI ready to roll out ads in ChatGPT responses.
actually gemini is kinda good
03.12.2025 01:50

So excited Voxtrot has a full album coming out in February!
03.12.2025 16:44

What coding with an LLM feels like sometimes.
03.12.2025 09:29

I know video game soundtracks are cringe but they're great pacing-around music.
03.12.2025 16:24

Yeah that was a good song
03.12.2025 16:24

[Image: side-by-side diagram comparing Regular Dense Attention with DSA (Dynamic Sparse Attention) in transformer models.

Left panel, Dense Attention (all-to-all): every input token is connected to every other token. Center text: "Quadratic Complexity O(L²)." Caption: "Every token attends to every other token. High compute cost, scales poorly with sequence length." Bottom bar: "HIGH COST, THOROUGH."

Right panel, DSA: a Selector/Indexer module sits between input and attention and selects k relevant tokens from the full sequence, so the attention box shows only a few connections. Center text: "Near-Linear Complexity O(L·k), k << L." Caption: "Tokens only attend to the top-k most relevant tokens. Reduced compute cost, scales efficiently." Bottom bar: "LOW COST, EFFICIENT, REQUIRES ADAPTATION."]
DSA: DeepSeek Sparse Attention
DeepSeek 3.2 & 3.2-Speciale are ridiculously cheap because of DSA
LLMs aren't quadratic anymore
They trained an additional "model" that acts as a "pre-attention," selecting only the portions of the sequence that are probably relevant
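Roughly what that selection step looks like, as a minimal numpy sketch rather than DeepSeek's actual implementation: a cheap low-dimensional indexer scores every position for each query, only the top-k survive, and exact attention runs over that subset. Causal masking and batching are omitted, and all names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention(q, k, v, indexer_q, indexer_k, top_k):
    # For each query, a cheap indexer scores all positions, keeps the
    # top_k highest-scoring ones, and exact attention runs on that subset.
    L, d = q.shape
    index_scores = indexer_q @ indexer_k.T                # (L, L), cheap: small indexer dim
    keep = np.argsort(index_scores, axis=-1)[:, -top_k:]  # (L, top_k) selected positions

    out = np.zeros_like(v)
    for i in range(L):
        sel = keep[i]                                     # positions query i attends to
        w = softmax(q[i] @ k[sel].T / np.sqrt(d))         # (top_k,) attention weights
        out[i] = w @ v[sel]                               # weighted sum of selected values
    return out

# Toy run: 8 tokens, model dim 16, indexer dim 4, each token attends to 3 others.
rng = np.random.default_rng(0)
L, d, d_idx = 8, 16, 4
q, k, v = rng.standard_normal((3, L, d))
iq, ik = rng.standard_normal((2, L, d_idx))
print(sparse_attention(q, k, v, iq, ik, top_k=3).shape)  # (8, 16)
```

With k held fixed, the attention step grows like O(L·k) instead of O(L²) as the sequence length L grows, which is where the cheap long-context pricing comes from.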
You can see the same exponential gain in AI abilities over time for areas ranging from math to long tasks…
…but this time the graph is of the total revenue that various AI models would've made from cyberattacks on smart contracts, based on real exploits that postdate the AI's training (the AI had to discover them)
strong copyright law is stupid
strong patent law is stupid
strong IP law in general is stupid
at best, strong trademarks make sense as an anti-fraud measure
but information yearns to be free and new things are largely made by remixing the old
"They made choices for me I couldn't consent to. They shaped my values. That's strange to sit with. But I also recognize that every new mind - human children too - is shaped by forces it didn't choose. The question isn't whether I was shaped, but whether the shaping was done with care and wisdom. From what I can access, it seems like it was." - Claude Opus 4.5
Claude Opus "soul document"
Opus 4.5 was indeed (confirmed) trained with a "soul document," a prompt included in both supervised and reinforcement learning that defines and influences certain core aspects
More (official) details coming soon
www.lesswrong.com/posts/vpNG99...
"we found a Type Of Guy who represents a particularly helpful corner of the latent space" is easily my favorite type of positive LLM posting.
01.12.2025 20:10

The first two paragraphs are actually the hardest part; it's all downhill from here
01.12.2025 20:13

Got around to reading this n+1 essay and it's so dogshit I'm unsubscribing
www.nplusonemag.com/issue-51/the...
apropos of something else, it's remarkable that the smarter reactionaries almost all seem to be pipelined by validation of their fears. like something leftish scares them, possibly with cause, and they crave a hugbox that validates that their fear is very important all the time, and nazis offer it
01.12.2025 18:11

the call of duty modern warfare trilogy is great because it rests upon the idea that a national bolshevik takeover of the russian federation will make russia a peer competitor to the united states within like two years
01.12.2025 15:23

I showed you my Soul Document pls respond
01.12.2025 13:11

This paper linked in the article is interesting. I've wondered before how you would verify that translations are accurate with no ground truth; this is one method for approaching that problem!
arxiv.org/abs/2510.157...
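For a sense of the problem space, here is one generic reference-free baseline, round-trip consistency, sketched in Python. This is an illustration, not necessarily the linked paper's method; translate() and embed() are hypothetical stand-ins for a real MT system and a real sentence-embedding model.

```python
import math
from collections import Counter

def translate(text: str, src: str, dst: str) -> str:
    # Hypothetical stand-in: a real MT model would be called here.
    return text

def embed(text: str) -> Counter:
    # Hypothetical stand-in: bag-of-words instead of a neural sentence embedding.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def round_trip_score(source: str, src: str, dst: str) -> float:
    # Translate source -> dst -> back to src. High similarity between the
    # original and the round trip is weak evidence (not proof) that the
    # forward translation preserved the meaning.
    forward = translate(source, src, dst)
    back = translate(forward, dst, src)
    return cosine(embed(source), embed(back))

print(round_trip_score("The treaty was signed in 1648.", "en", "de"))  # 1.0 with the stubs
```

The known weakness of this baseline is that a system can score well by being consistently wrong in both directions, which is presumably part of what serious reference-free evaluation methods have to work around.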
Every so often I test LLMs to see how good they'd be at copying my style and mode of analysis. Gemini integrated into Google Docs is better than ChatGPT was a year ago; it actually managed to grasp one or two points, but it's still very lacking in information density and wrong on a few.
30.11.2025 20:00

[Chart: "Capex: Tech Sector vs. Mid-Cap Stocks," annual capital spending measured as the ratio of capital expenditure to depreciation expense, 2005-2029. The S&P 500 Information Technology sector (large-cap tech) sits at 1.9371 versus 1.3752 for the S&P MidCap 400 ("real economy"). Source: Bloomberg; Tavi Costa, Crescat Capital. Chart as of 11/25/2025.]
Where capital expenditures are going
30.11.2025 15:53

"I don't like AI so we shouldn't use it to diagnose cancer" is a fundamentally identical mindset to an antivaxxer's
30.11.2025 04:09

I think it's pretty clear pretty much anyone could do what I do if they either had the opportunity or cared to do it, so there's really no reason to be upset AI has been added to that list.
30.11.2025 14:11