S*QCU

@sqcu.bsky.social

I like paper and moving pictures. sqcu.dev (@sameQCU on twitter)

162 Followers  |  113 Following  |  512 Posts  |  Joined: 24.04.2023

Latest posts by sqcu.bsky.social on Bluesky

ouching at that one

03.08.2025 07:36 — 👍 1    🔁 0    💬 0    📌 0

ahhh now this is a videogame

24.07.2025 17:18 — 👍 1    🔁 0    💬 0    📌 0
Post image Post image

it like, well, you see, i can't explain it.
there's a lot more code needed to improve really basic algorithmic features before there's time to show off what's working better than other image synthesizer tools.
these pictures should explain more than words can.

13.07.2025 04:30 — 👍 0    🔁 0    💬 0    📌 0
Post image

oh yeah the slider thingy

13.07.2025 04:28 — 👍 0    🔁 0    💬 1    📌 0

doing proper numbers

20.05.2025 03:36 — 👍 1    🔁 0    💬 0    📌 0

mythopoietic, some kind of actual man-vs-nature & man-vs-fallibility-of-oaths system of conflict

03.05.2025 01:58 — 👍 1    🔁 0    💬 1    📌 0

bsky.app/profile/norv...
"by scialabba in his shifting perception of hitchen's eloquence..."
dark souls item description prose

25.04.2025 19:01 — 👍 4    🔁 0    💬 0    📌 0

there are actually some ways you can use an advanced ai image generation model which has a really obvious motif (BROWN lol its SEPIA in here haha) to inform dataset design, filtering, annotation, and maybe some kinds of augmentation? but it gets trickier and more labor heavy.

18.04.2025 00:13 — 👍 1    🔁 0    💬 0    📌 0

unfortunately the statistical error signature of their model (dude BROWN) is so exquisitely varied in its presentation and manifest forms that they are really gonna struggle to use their current tech for synthetic data generation

18.04.2025 00:12 — 👍 1    🔁 0    💬 1    📌 0

learning to relax the tension/attachment/binding which yearns (both) to game (and to suppress the yearning for gaming) will release a drain on your attention which necessarily must be present both while gaming and not gaming

14.04.2025 23:46 — 👍 2    🔁 0    💬 0    📌 0

wu wei
relax expectations for accomplishment through deliberate effort
balance seemingly purposeful activities with seemingly purposeless ones (going on walks, attentively preparing simple foods, flickshotting heads) to better understand the cost and value of either

14.04.2025 23:45 — 👍 2    🔁 0    💬 1    📌 0

type of guy who does a wavelet transform on his kids and says 'look its still peaking second' to the stats poster

11.04.2025 03:36 — 👍 2    🔁 0    💬 0    📌 0

i was mortified for a moment that someone was writing arithmetic units from first principles and physically sighed with relief when it was factorio belts on the right hand side

11.04.2025 03:31 — 👍 0    🔁 0    💬 0    📌 0

sparse coding of emoji so that storing a large maximum number of emoji per post doesn't crank db storage costs to the product dictionary_size × user_interactions

10.04.2025 19:49 — 👍 1    🔁 0    💬 0    📌 0
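The post above can be sketched as code. This is a hedged illustration, not anyone's actual schema: `emoji_dictionary`, `encode_sparse`, and `decode_sparse` are hypothetical names. The point is that a post stores only `(dictionary_id, count)` pairs for emoji actually used, so per-post cost scales with usage rather than with the full dictionary size.

```python
# Sparse emoji coding sketch (all names hypothetical): instead of a
# dense per-post vector of length dictionary_size, store only the
# (dictionary_id, count) pairs for emoji that actually appear.

emoji_dictionary = {"👍": 0, "🔁": 1, "💬": 2, "📌": 3}  # global id table

def encode_sparse(emoji_counts: dict[str, int]) -> list[tuple[int, int]]:
    """Encode a post's emoji as sorted (dictionary_id, count) pairs."""
    return sorted((emoji_dictionary[e], c) for e, c in emoji_counts.items() if c > 0)

def decode_sparse(pairs: list[tuple[int, int]]) -> dict[str, int]:
    """Recover emoji counts from the sparse pairs."""
    inverse = {i: e for e, i in emoji_dictionary.items()}
    return {inverse[i]: c for i, c in pairs}

post = {"👍": 4, "💬": 1}
sparse = encode_sparse(post)        # 2 pairs stored, not dictionary_size slots
assert decode_sparse(sparse) == post
```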

"even though we know we see phenomena, not noumena (easily revealed by closing your eyes and drinking from the obscured cup) we still reach for phenomena and feel that we're addressing noumena" is, despite it all, something to be said in every generation

09.04.2025 00:01 — 👍 2    🔁 0    💬 1    📌 0
Post image

some of the most incredible necroposts in the history of net forums are appearing in that lesswrong thread

08.04.2025 23:15 — 👍 4    🔁 0    💬 1    📌 1
Post image 08.04.2025 23:09 — 👍 0    🔁 0    💬 0    📌 0
Post image 08.04.2025 23:09 — 👍 0    🔁 0    💬 1    📌 0
Post image 08.04.2025 23:08 — 👍 0    🔁 0    💬 1    📌 0
Post image 08.04.2025 23:07 — 👍 0    🔁 0    💬 1    📌 0
Post image

scribble backlog

08.04.2025 23:07 — 👍 2    🔁 1    💬 1    📌 0

i have a hunch that adding architectural details to the hypernetwork's strict task will somehow make learning the effects of architectures explicit and therefore easier (with scale i suppose) than trying to avoid the topic. might make synthesizing training samples easier too!

26.03.2025 09:09 — 👍 1    🔁 0    💬 0    📌 0

we can then train our hypernetwork synthesizer inside of this variable upscale ratio autoencoder's 'latents', and get very very very big networks 'S' by using our hypernetwork to *specify* networks within the compressed autoencoder format, then choose very large upscale ratio conditioning inputs

24.03.2025 06:13 — 👍 1    🔁 0    💬 0    📌 0
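The conditional-upscale weight autoencoder in the post above can be sketched very roughly. Everything here is a toy stand-in (untrained random projections, hypothetical names like `encode`/`decode`): it only shows the shape of the idea, that one fixed-size latent format can decode to differently-sized weight outputs under a conditioning input.

```python
# Toy sketch (all names hypothetical, projections untrained): networks
# of different sizes encode to a fixed-size latent, and a conditioning
# scalar 'upscale' selects how large the decoded weight vector is.
import numpy as np

LATENT = 32

def encode(weights: np.ndarray) -> np.ndarray:
    """Project any flattened weight vector to LATENT dims."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((LATENT, weights.size))
    return proj @ weights

def decode(latent: np.ndarray, upscale: int) -> np.ndarray:
    """Conditioning input 'upscale' dictates the decoded output size."""
    rng = np.random.default_rng(upscale)  # conditioning selects the map
    proj = rng.standard_normal((LATENT * upscale, LATENT))
    return proj @ latent

small = decode(encode(np.ones(64)), upscale=2)    # same latent format...
large = decode(encode(np.ones(64)), upscale=16)   # ...much bigger output
assert small.size == 64 and large.size == 512
```

A hypernetwork emitting latents in this format could then "specify" very large networks by choosing large upscale conditioning, as the post describes.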

for bonus points, if we want our S to be even bigger, we can train a weird neural-network-autoencoder to compress large output networks and small output networks into similarly-'sized' latents, using a conditional input to dictate upscaling factor from our latents.

24.03.2025 06:11 — 👍 1    🔁 0    💬 2    📌 0

now with this suggestion already here, we have a kind of silly way to make a really big output network S!
all we need to do is let the hypernetworkization model write the output network serially, one relatively detailed tokenized chunk at a time!

24.03.2025 06:09 — 👍 1    🔁 0    💬 1    📌 0

this means that a valid input to this network can go:
[QPROJ _ _ _ _ ] [K PROJ _ _ _ _ ] [V PROJ _ _ _ _ ] [activation] [swiglu] [FFN [hyper1 hyper2 hyper3]] [UNEMBED MATRIX _ _ _ _ _ _ _ _ _ _ ].
underscores are here to emphasize sequence-length of inputs.

24.03.2025 06:08 — 👍 1    🔁 0    💬 2    📌 0
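The token layout in the post above can be sketched concretely. This is a hedged illustration with assumed names (`tokenize_matrix`, `CHUNK`, the tag strings): each weight matrix becomes a type-tag token followed by fixed-size weight chunks (the underscores in the layout), and "boring" ops like activation functions get their own explicit tokens.

```python
# Sketch (assumed names) of serializing a small attention block into a
# typed token stream: a TAG token per component, then fixed-size
# weight chunks; activations get explicit tokens rather than being
# implicit in the architecture.
import numpy as np

CHUNK = 4  # weights per token, purely illustrative

def tokenize_matrix(tag: str, w: np.ndarray, chunk: int = CHUNK):
    """Yield a type-tag token, then padded fixed-size weight chunks."""
    flat = w.ravel()
    flat = np.pad(flat, (0, (-len(flat)) % chunk))
    yield ("TAG", tag)
    for i in range(0, len(flat), chunk):
        yield ("WEIGHTS", flat[i:i + chunk])

def tokenize_block(q, k, v):
    stream = []
    for tag, w in [("QPROJ", q), ("KPROJ", k), ("VPROJ", v)]:
        stream.extend(tokenize_matrix(tag, w))
    stream.append(("TAG", "activation:swiglu"))  # explicit token for a "boring" op
    return stream

q = k = v = np.zeros((2, 4))
tokens = tokenize_block(q, k, v)
tags = [t[1] for t in tokens if t[0] == "TAG"]
assert tags == ["QPROJ", "KPROJ", "VPROJ", "activation:swiglu"]
```

Delimiting by type tags is what lets matrices of variable size share one consistent learned representation, per the earlier post in the thread.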

and make sure that the description of an input weight collection tokenizes boring operations like normalizers, feedforwards, activation functions, and hyperparameters for boring functions so that they have an explicit and consistent representation within its learned embeddings of datatypes.

24.03.2025 06:06 — 👍 1    🔁 0    💬 1    📌 0

pass the hypernetwork a collection of trainable matrices (multi-'token'-spanning datatypes) with delimiters to separate matrices of variable size (see llava and wan21's autoencoder notes for variable size tiled input encodings) so it can ingest itself for recurrence...

24.03.2025 06:05 — 👍 2    🔁 0    💬 1    📌 0

i'll do my best to try to find a way to make it substantially smaller than S.
basic tricks: compose the hypernet from factored matrices wherever possible (so its outputs are bigger than itself)

24.03.2025 06:04 — 👍 1    🔁 0    💬 1    📌 0
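The "factored matrices" trick above is simple to show numerically: a rank-r factorization A @ B stores r*(m+n) parameters while spanning an m×n output, so the hypernet's outputs can be much bigger than the hypernet itself. The dimensions below are illustrative, not anyone's actual configuration.

```python
# Parameter-count arithmetic for a factored matrix (illustrative dims):
# a rank-r pair (A, B) costs r*(m+n) parameters but its product A @ B
# is a full (m, n) matrix, costing m*n parameters if stored densely.
import numpy as np

m, n, r = 512, 512, 16
A = np.random.randn(m, r)
B = np.random.randn(r, n)

dense_params = m * n              # 262144 parameters for the full matrix
factored_params = r * (m + n)     # 16384 parameters for the factors
output = A @ B                    # still spans the full (m, n) output

assert output.shape == (m, n)
assert factored_params < dense_params
```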

you could say i'm a bit of a TESCREAList

24.03.2025 03:34 — 👍 3    🔁 0    💬 0    📌 0
