@sqcu.bsky.social
I like paper and moving pictures. sqcu.dev (@sameQCU on twitter)

ouching at that one
03.08.2025 07:36

ahhh now this is a videogame
24.07.2025 17:18

it's like, well, you see, i can't explain it.
there's a lot more code needed to improve really basic algorithmic features before there's time to show off what's working better than other image synthesizer tools.
these pictures should explain more than words can.
oh yeah the slider thingy
13.07.2025 04:28

doing proper numbers
20.05.2025 03:36

mythopoietic, some kind of actual man-vs-nature & man-vs-fallibility-of-oaths system of conflict
03.05.2025 01:58

bsky.app/profile/norv...
"by scialabba in his shifting perception of hitchen's eloquence..."
dark souls item description prose
there are actually some ways you can use an advanced ai image generation model which has a really obvious motif (BROWN lol it's SEPIA in here haha) to inform dataset design, filtering, annotation, and maybe some kinds of augmentation? but it gets trickier and more labor heavy.
18.04.2025 00:13
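a minimal sketch of the filtering half of that idea (a toy heuristic of mine, not a method from the post): score images by how far their channel means drift toward sepia, and use the score to filter or flag a dataset polluted by a BROWN-biased generator. the threshold is a made-up placeholder.

```python
# toy brown-ness heuristic (assumption: sepia drift shows up as the mean
# red channel dominating the mean blue channel)
import numpy as np
from PIL import Image

def sepia_score(path: str) -> float:
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    r, g, b = rgb.mean(axis=(0, 1))
    return (r - b) / (r + g + b + 1e-6)  # higher = browner

# filtering pass: drop (or just flag for annotation) suspiciously sepia images
# keep = [p for p in paths if sepia_score(p) < 0.08]  # 0.08 is a placeholder
```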
unfortunately the statistical error signature of their model (dude BROWN) is so exquisitely varied in its presentation and manifest forms that they are really gonna struggle to use their current tech for synthetic data generation
18.04.2025 00:12

learning to relax the tension/attachment/binding which yearns (both) to game (and to suppress the yearning for gaming) will release a drain on your attention which necessarily must be present both while gaming and not gaming
14.04.2025 23:46

wu wei
relax expectations for accomplishment through deliberate effort
balance seemingly purposeful activities with seemingly purposeless ones (going on walks, attentively preparing simple foods, flickshotting heads) to better understand the cost and value of either
type of guy who does a wavelet transform on his kids and says 'look it's still peaking second' to the stats poster
11.04.2025 03:36

i was mortified for a moment that someone was writing arithmetic units from first principles and physically sighed with relief when it was factorio belts on the right hand side
11.04.2025 03:31

sparse coding of emoji so that storing a large maximum number of emoji per post doesn't crank db storage costs up to the product dictionary_size × user_interactions
10.04.2025 19:49
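a minimal sketch of the storage argument, with a hypothetical reactions table: a dense layout pays dictionary_size counters per post, while a sparse map only pays for emoji actually used, so cost tracks user interactions instead.

```python
# sparse per-post reaction counts: storage grows with interactions,
# not with the size of the emoji dictionary
DICTIONARY_SIZE = 100_000  # hypothetical max number of distinct emoji

class PostReactions:
    def __init__(self):
        self.counts: dict[int, int] = {}  # emoji_id -> count, sparse

    def react(self, emoji_id: int) -> None:
        self.counts[emoji_id] = self.counts.get(emoji_id, 0) + 1

post = PostReactions()
for e in (128077, 128077, 129315):  # two thumbs-up, one rofl
    post.react(e)

# 2 stored entries instead of a 100_000-wide dense row
assert len(post.counts) == 2
```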
"even though we know we see phenomena, not noumena (easily revealed by closing your eyes and drinking from the obscured cup) we still reach for phenomena and feel that we're addressing noumena" is, despite it all, something to be said in every generation
09.04.2025 00:01

some of the most incredible necroposts in the history of net forums are appearing in that lesswrong thread
08.04.2025 23:15

scribble backlog
08.04.2025 23:07

i have a hunch that adding architectural details to the hypernetwork's strict task will somehow make learning the effects of architectures explicit and therefore easier (with scale i suppose) than trying to avoid the topic. might make synthesizing training samples easier too!
26.03.2025 09:09
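a minimal sketch of that hunch, assuming torch and made-up dimensions: feed the hypernetwork an explicit architecture descriptor (depth, width, activation id, ...) alongside its task input, instead of leaving architecture implicit.

```python
import torch
import torch.nn as nn

class ArchConditionedHypernet(nn.Module):
    """maps (task embedding, architecture descriptor) -> flat target weights"""
    def __init__(self, task_dim=64, arch_dim=8, out_params=4096):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(task_dim + arch_dim, 256), nn.SiLU(),
            nn.Linear(256, out_params),
        )

    def forward(self, task_emb, arch_desc):
        # arch_desc is the explicit part: e.g. [depth, width, activation id, ...]
        return self.body(torch.cat([task_emb, arch_desc], dim=-1))

hyper = ArchConditionedHypernet()
task = torch.randn(1, 64)
arch = torch.tensor([[4.0, 256.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]])  # toy descriptor
flat_weights = hyper(task, arch)  # reshape into the target network's layers
```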
we can then train our hypernetwork synthesizer inside of this variable upscale ratio autoencoder's 'latents', and get very very very big networks 'S' by using our hypernetwork to *specify* networks within the compressed autoencoder format, then choosing very large upscale ratio conditioning inputs
24.03.2025 06:13
for bonus points, if we want our S to be even bigger, we can train a weird neural-network-autoencoder to compress large output networks and small output networks into similarly-'sized' latents, using a conditional input to dictate the upscaling factor from our latents.
24.03.2025 06:11
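a minimal sketch of the two posts above, every dimension invented: chunk-pool flattened weights of any size into one fixed latent, then decode with an explicit upscale-ratio condition; a hypernetwork that emits latents plus a large ratio would then specify a very big S.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CHUNK, LATENT = 256, 128

class WeightAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(CHUNK, LATENT)
        self.dec = nn.Linear(LATENT + 1, CHUNK)  # +1: upscale ratio condition

    def encode(self, flat_weights: torch.Tensor) -> torch.Tensor:
        # pad to a chunk multiple and mean-pool chunk codes, so big and small
        # networks land in the same similarly-'sized' latent space
        pad = (-flat_weights.numel()) % CHUNK
        x = F.pad(flat_weights, (0, pad)).view(-1, CHUNK)
        return self.enc(x).mean(dim=0)

    def decode(self, z: torch.Tensor, ratio: float, n_chunks: int) -> torch.Tensor:
        # crude decoder: condition on the ratio, emit n_chunks weight chunks
        zc = torch.cat([z, torch.tensor([ratio])])
        return self.dec(zc).repeat(n_chunks)

ae = WeightAutoencoder()
z = ae.encode(torch.randn(10_000))                # small network in
big_S = ae.decode(z, ratio=64.0, n_chunks=2_500)  # 640k params out
```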
now with this suggestion already here, we have a kind of silly way to make a really big output network S!
all we need to do is let the hypernetworkization model write the output network serially, one relatively detailed tokenized chunk at a time!
this means that a valid input to this network can go:
[QPROJ _ _ _ _ ] [KPROJ _ _ _ _ ] [VPROJ _ _ _ _ ] [activation] [swiglu] [FFN [hyper1 hyper2 hyper3]] [UNEMBED MATRIX _ _ _ _ _ _ _ _ _ _ ].
underscores are here to emphasize sequence-length of inputs.
and make sure that the description of an input weight collection tokenizes boring operations like normalizers, feedforwards, activation functions, and hyperparameters for boring functions so that they have an explicit and consistent representation within its learned embeddings of datatypes.
24.03.2025 06:06
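a minimal sketch of that token layout (op names from the post, everything else invented): boring ops get one explicit token each, weight matrices span many chunk tokens, and a generator could emit this stream one chunk at a time, autoregressive-text style.

```python
# serialize a network description into the stream sketched above; each '_'
# placeholder in the post corresponds to one weight-chunk token here
def serialize_network(layers):
    """layers: list of (op_token, weight_chunk_tokens)"""
    stream = []
    for op, chunks in layers:
        stream.append(op)       # explicit, consistent representation of the op
        stream.extend(chunks)   # zero or many chunk tokens, depending on size
    return stream

toy = [
    ("[QPROJ]",  ["q0", "q1", "q2", "q3"]),
    ("[KPROJ]",  ["k0", "k1", "k2", "k3"]),
    ("[VPROJ]",  ["v0", "v1", "v2", "v3"]),
    ("[swiglu]", []),                       # boring op: one token, no weights
    ("[UNEMBED]", [f"u{i}" for i in range(10)]),
]
print(serialize_network(toy))
```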
pass the hypernetwork a collection of trainable matrices (multi-'token'-spanning datatypes) with delimiters to separate matrices of variable size (see llava and wan21's autoencoder notes for variable-size tiled input encodings) so it can ingest itself for recurrence...
24.03.2025 06:05
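a minimal sketch of the ingestion side, assuming torch and toy delimiter vectors: flatten each matrix into fixed-width tokens and wrap it in [MAT]/[/MAT]-style delimiters so variably sized matrices stay separable inside one sequence.

```python
import torch
import torch.nn.functional as F

TOKEN_DIM = 64
MAT_OPEN = torch.full((1, TOKEN_DIM), 1.0)    # toy stand-in for a [MAT] token
MAT_CLOSE = torch.full((1, TOKEN_DIM), -1.0)  # toy stand-in for [/MAT]

def matrix_to_tokens(m: torch.Tensor) -> torch.Tensor:
    flat = m.flatten()
    flat = F.pad(flat, (0, (-flat.numel()) % TOKEN_DIM))
    return flat.view(-1, TOKEN_DIM)  # one row per 'token'

def encode_matrices(matrices) -> torch.Tensor:
    parts = []
    for m in matrices:  # matrices of any shape, delimited so they separate
        parts += [MAT_OPEN, matrix_to_tokens(m), MAT_CLOSE]
    return torch.cat(parts)

# the hypernetwork's own matrices could go through this too, for recurrence
seq = encode_matrices([torch.randn(8, 8), torch.randn(3, 5)])
```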
i'll do my best to try to find a way to make it substantially smaller than S.
basic tricks: compose the hypernet from factored matrices wherever possible (so its outputs are bigger than itself; sketched below)
you could say i'm a bit of a TESCREAList
24.03.2025 03:34
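a minimal sketch of the factored-matrix trick from the 'basic tricks' post, dimensions invented: the head emits two thin factors whose product is the full-size output, so the generator holds far fewer parameters than it produces.

```python
import torch
import torch.nn as nn

class FactoredHead(nn.Module):
    def __init__(self, hidden=128, n=1024, m=1024, rank=8):
        super().__init__()
        self.n, self.m, self.rank = n, m, rank
        self.to_u = nn.Linear(hidden, n * rank)  # ~1M params here...
        self.to_v = nn.Linear(hidden, rank * m)  # ...vs ~134M for a direct head

    def forward(self, h):
        u = self.to_u(h).view(self.n, self.rank)
        v = self.to_v(h).view(self.rank, self.m)
        return u @ v  # full 1024x1024 output from a much smaller generator

W = FactoredHead()(torch.randn(128))
```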