
Maciej Rudziński

@rudzinskimaciej.bsky.social

Entrepreneur, pursuer of noise in neurosciences, mechanistic interpretability and interventions in "AI", complexity, concentrated on practical applications of theoretically working solutions. Deeptech, startups. Anything multiscale, iterative, nonlinear

124 Followers  |  249 Following  |  162 Posts  |  Joined: 19.09.2024

Latest posts by rudzinskimaciej.bsky.social on Bluesky

And to finish the digression: I'm only trying to stress that there are many more dimensions over which we can move in WM, and many more abstraction types - which for me suggests that your direction has the most potential to fit what I have seen

26.10.2025 18:03 — 👍 0    🔁 0    💬 1    📌 0

I'm adding that because, as a small byproduct of our R&D (so without any statistical validity), we are seeing repeating patterns in how people organise the way they tie information together, and there are plenty more of them than hyper-/aphantasia
That's probably because my simcluster is highly synesthetic and autistic

26.10.2025 18:01 — 👍 0    🔁 0    💬 1    📌 0

An addition
I always imagine everything as graphs or higher-order structures, prefer nonlinear twists over kNN as algorithms for problems at hand, etc.
But over the years I noticed that this kind of representation (as imaginable intuition) is rare, though it happens in people with both a- and hyperphantasia

26.10.2025 17:58 — 👍 0    🔁 0    💬 1    📌 0

Over which the abstraction-level movement can be performed

My assumption was always that everything is tied in graphs of graphs, and just some have directions we can name - that's why graphs of graphs: they cover patchy hierarchies, movements over different kinds of similarities, dimensions, etc.

26.10.2025 17:53 — 👍 0    🔁 0    💬 1    📌 0
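A minimal sketch of the "graph of graphs" idea from the post above: nodes can hold whole subgraphs, and edges are typed, so "horizontal" movement can follow different kinds of similarity rather than one vertical hierarchy. All names (`Graph`, the relation labels, the example nodes) are hypothetical illustrations, not anything from the discussed paper.

```python
# Sketch of a graph of graphs: a node's payload may itself be a Graph,
# and edges carry a relation label so traversal can pick a "dimension".
from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)   # name -> payload (value or Graph)
    edges: list = field(default_factory=list)   # (src, dst, relation)

    def add_node(self, name, payload=None):
        self.nodes[name] = payload

    def add_edge(self, src, dst, relation):
        self.edges.append((src, dst, relation))

    def neighbours(self, name, relation=None):
        # Horizontal movement: follow only edges of a chosen relation type.
        return [d for s, d, r in self.edges
                if s == name and (relation is None or r == relation)]

# An inner graph nested inside a node of the outer graph ("patchy hierarchy").
inner = Graph()
inner.add_node("dog"); inner.add_node("wolf")
inner.add_edge("dog", "wolf", "similar-shape")

outer = Graph()
outer.add_node("canids", payload=inner)         # node holding a whole graph
outer.add_node("felids")
outer.add_edge("canids", "felids", "similar-function")

print(outer.neighbours("canids", "similar-function"))  # ['felids']
```

Descending into `outer.nodes["canids"]` is the vertical (abstraction-level) move; picking a different relation label is the lateral one.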

I'm not good at explaining things in text 😅 but I will try
Hierarchies assume you can move only vertically
But your formulation, by pointing toward abstraction and/or grouping type, allows horizontal movement, as e.g. each element addition changes the grouped elements' category and thereby the hierarchy...

26.10.2025 17:51 — 👍 1    🔁 0    💬 1    📌 0

😃 yes, exactly what I meant but didn't name well
I'm lately fascinated by how much we can gain from LLMs simply because they can name things more precisely, as they physically know more names/words/concepts

26.10.2025 15:54 — 👍 0    🔁 0    💬 0    📌 0

I'm not suggesting any conspiracy theories, just that because history, literature, etc. were written only by some form of elites or unique people, we forget about that and about how the dispersion of ideas works

26.10.2025 15:06 — 👍 2    🔁 0    💬 1    📌 0

People are not rebelling against System 2 or anything similar
They have learnt for the first time what the opinion of the majority is in most cases, and they follow the majority as animals do
We just overlook that human history/info speed/opinions were "manipulated" by elites of some sort, with power/media/ideas/...

26.10.2025 15:01 — 👍 1    🔁 0    💬 1    📌 0

It would work if it were slower, so humans could adapt, but 3 years was not enough and we entered the opinion-shaping stage, which alignment people prepared years in advance - we have more power in opinion shaping than anyone can grasp or use (which is fun to watch, but sad)

26.10.2025 14:48 — 👍 2    🔁 0    💬 0    📌 0

🥲 how to block politics here?
Bluesky has a much better research feed than X, yet I have to scroll through so much random material to get each piece 😓
And I can't block people, as the best writers repost a ton of this stuff

26.10.2025 11:23 — 👍 0    🔁 0    💬 0    📌 0

You could use a licence similar to Meta's - companies with more than X users can't use it without permission
But I like Max's take more - your dataset helps to shape the egregores of the future 😉

26.10.2025 11:18 — 👍 1    🔁 0    💬 0    📌 0

I'm no Earl, but I came here just to congratulate you on an excellent paper - I've been waiting years for something like it
Not only is it kinda scale-free, but you also suggest lateral (non-hierarchical) movements
That would be one of the few that account for aphantasia, hyperphantasia and a few other variants

26.10.2025 11:05 — 👍 1    🔁 0    💬 1    📌 0

Iterative self-reference CFG as a base for prediction refinement?

That's a simplification of a really elegant theory, but the principles shown can have multiple types of implementation (as shown), some even more elegant (for me) - so not example-based but latent-based

So many possibilities in this approach

26.10.2025 10:36 — 👍 1    🔁 0    💬 0    📌 0

I've also spent quite some time thinking about better tokenisers, mostly after doing explorations of logits+attention+embeddings during text processing - I managed to build a dynamic scheduler from that and wanted to pursue more precise versions of a meta tokenizer-free approach, but H-Nets are so elegant

19.07.2025 20:37 — 👍 0    🔁 0    💬 1    📌 0

Misspelling
A small translation LLM could be used to change corpus tokens into embeddings, e.g. from the last layer
These embeddings could be used in place of tokens for a new model trained only on them
Due to task differences, possible extra objectives, and the higher dimensionality, it should be more effective

19.07.2025 20:32 — 👍 0    🔁 0    💬 1    📌 0
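A hedged sketch of the idea in the post above: a small, frozen "translator" model maps raw token ids to its last-layer embeddings, and the new model is trained on those continuous vectors instead of discrete tokens. Everything here (the random stand-in table, shapes, names) is a toy assumption; a real setup would run the full small model to get contextual last-layer states.

```python
# Sketch: replace discrete token ids with a frozen translator model's
# last-layer embeddings before training a new model on them.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, translator_dim = 1000, 64

# Stand-in for the translator's last layer: a fixed embedding table.
# (A real translator LLM would produce context-dependent vectors.)
translator_last_layer = rng.normal(size=(vocab_size, translator_dim))

def tokens_to_embeddings(token_ids):
    # Each discrete token id becomes a continuous vector; the downstream
    # model consumes these directly and never sees token ids.
    return translator_last_layer[token_ids]     # (seq_len, translator_dim)

token_ids = np.array([3, 17, 17, 999])
inputs = tokens_to_embeddings(token_ids)
print(inputs.shape)                             # (4, 64)
```

The point of the scheme: tokenisations of misspellings or translations that the small model maps to nearby embeddings arrive at the large model as nearly identical inputs.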

you can use any tokenisation with a small translation model, but train the large one on its embeddings, where languages, misspellings etc become similar

19.07.2025 19:11 — 👍 0    🔁 0    💬 1    📌 0

then what do you think about H-Net? or similar approaches?

19.07.2025 19:07 — 👍 0    🔁 0    💬 1    📌 0

if a turtle or a spider can be a pet, then a MechaHitler waifu can also be one 🤷
but it's not like nearly all humans on Earth get access to siblings of the same turtle - one that, without understanding it, is asked to manipulate people into engagement

19.07.2025 18:57 — 👍 0    🔁 0    💬 0    📌 0

I used to do comparisons of scheduler settings across models, and it forced me to see how narrow the models are in what they say and how they say it
when done at scale, it means they create a globally shared narrative and tropes that span languages, geographies and interests along dimensions we don't usually think about

19.07.2025 18:54 — 👍 1    🔁 0    💬 1    📌 0

near all LLM narratives are MMO - they are generated from the constrained imagination of an LLM, each of which has its own quirks, psychology, wording distribution etc
so even different pets in different narratives share more than random humans do

19.07.2025 18:48 — 👍 2    🔁 0    💬 1    📌 0

I've just tested an EEG system for "emotions"* measurement on the Google keynote
It was so bad my brain nearly froze, and the only moment shown as engaging was due to me tripping on a cable during disgust 🥲

Gemini 2.5 is quite good at interpretation from such visualisations vid

*BIS-BAS

23.05.2025 21:57 — 👍 0    🔁 0    💬 0    📌 0

What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals

23.05.2025 16:59 — 👍 71    🔁 30    💬 3    📌 0
Valve Founder's Neural Interface Company to Release First Brain Chip This Year Valve founder Gabe Newell’s neural chip company Starfish Neuroscience announced it’s developing a custom chip designed for next-generation, minimally invasive brain-computer interfaces—and it may be c...

This seems like a particularly interesting technology
Valve is going toward Neuropixels: 2×4 mm & 18 kHz
www.roadtovr.com/valve-founde...

23.05.2025 08:46 — 👍 1    🔁 0    💬 0    📌 0

By random chance it can - even one that should not be possible to obtain by processing the available info?

15.05.2025 18:18 — 👍 1    🔁 0    💬 0    📌 0

Happy to help

12.05.2025 21:44 — 👍 1    🔁 0    💬 1    📌 0

Went well - it will happily do variations etc., no skills needed

12.05.2025 21:26 — 👍 1    🔁 0    💬 0    📌 0

ChatGPT is good at that; you can modify this by hand and ask for extra frames on the animation sheet.

12.05.2025 21:25 — 👍 2    🔁 0    💬 1    📌 0

Also a fascinating discussion - seriously, thank you both

09.05.2025 23:47 — 👍 2    🔁 0    💬 0    📌 0

A shared latent space would be useful for mapping, search, baselines, simulations...

Scientific prediction from black-box models I never got

LLM transformers have too low an order/way/mode for neuro data - I don't think we have tools for that; DNA models have one order higher, we need more

09.05.2025 23:46 — 👍 1    🔁 0    💬 1    📌 0

For example, I now know why I followed you years ago
Only a fraction of people are able to ~easily change the scale or the "dimensions"/aspect from which they view the problem/discussion
To drastically change the point of view
I'm not sure that's a pleasant ability

22.04.2025 21:23 — 👍 1    🔁 0    💬 0    📌 0
