it's Space Quebec, because they speak a language with the same phonology as French which is incomprehensible to Francophones
06.05.2025 22:13 · ❤ 221 🔁 26 💬 15 📌 1

@bilal2vec.bsky.social
twitter.com/bilaltwovec
there's been some interesting work lately on multiscale autoregressive image modeling arxiv.org/abs/2404.029...
29.03.2025 02:55 · ❤ 6 🔁 0 💬 1 📌 0

city of stairs and the tainted cup both very good
28.03.2025 12:00 · ❤ 1 🔁 0 💬 0 📌 0

once again coming crawling back to AdamW after every paper published after 2015 has failed me again
27.03.2025 04:24 · ❤ 68 🔁 4 💬 6 📌 0

i think they're just starting to realize what they unleashed w the tweet
25.03.2025 12:43 · ❤ 1 🔁 0 💬 0 📌 0

from the other app xd
25.03.2025 12:35 · ❤ 1 🔁 1 💬 0 📌 0

Love to log on to the Horrors app to catch up on today's Horrors
15.03.2025 18:27 · ❤ 20 🔁 3 💬 1 📌 0

we're doing ai exorcisms in 2025 huh
04.03.2025 19:29 · ❤ 8 🔁 0 💬 0 📌 0

like one of the big things about 404 media, Brian Merchant, Paris Marx, Ed Zitron, etc., is that they neither know nor care how the subject of their criticism actually works
23.01.2025 20:18 · ❤ 179 🔁 15 💬 12 📌 10

I wish academic ML was a bit more skeptical of papers and less skeptical of industry. I get that it sucks to not have visibility on details, but it doesn't invalidate the results. On the flip side, there are too many papers whose messages are parroted despite sketchy experiments.
11.01.2025 21:47 · ❤ 9 🔁 2 💬 3 📌 0

we're going shopping
09.01.2025 04:38 · ❤ 1 🔁 0 💬 0 📌 0

an LLM that uses streetview to pre-drive down the route and assemble comments like "at the big red barn, turn left" "when you get to the sorta squiggly road, take the exit" like a farmer would
09.01.2025 00:44 · ❤ 6 🔁 3 💬 0 📌 0

if you squint hard enough everything in ml is either a special case of the KL div or newton's method
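On the KL half of this: minimizing cross-entropy against a fixed target distribution is the same as minimizing KL(p‖q) up to a constant, the entropy of p. A minimal numpy sketch with made-up distributions:

```python
import numpy as np

# illustrative target distribution p and model distribution q
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])

cross_entropy = -np.sum(p * np.log(q))
entropy = -np.sum(p * np.log(p))
kl = np.sum(p * np.log(p / q))

# H(p, q) = H(p) + KL(p || q): minimizing cross-entropy in q
# is minimizing KL(p || q), since H(p) is constant in q
assert np.isclose(cross_entropy, entropy + kl)
```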
04.01.2025 16:27 · ❤ 3 🔁 0 💬 0 📌 0

a lot of machine learning research is about discovering which parts of mathematics are actually L2 regularization and which parts of mathematics are actually Adam
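A classic instance: with plain SGD, adding an L2 penalty to the loss gives exactly the decoupled weight decay update, so the two are interchangeable; under Adam's per-parameter scaling they are not, which is the AdamW observation. A minimal sketch with illustrative values:

```python
import numpy as np

lr, lam = 0.1, 0.01  # illustrative learning rate and regularization strength
w = np.array([1.0, -2.0, 3.0])
grad = np.array([0.5, 0.5, 0.5])  # gradient of the unregularized loss

# SGD step on loss + (lam/2) * ||w||^2
w_l2 = w - lr * (grad + lam * w)

# SGD step with decoupled weight decay
w_decay = w - lr * grad - lr * lam * w

# identical updates for SGD; Adam's per-parameter scaling breaks
# the equivalence, which is the point of AdamW
assert np.allclose(w_l2, w_decay)
```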
04.01.2025 06:59 · ❤ 76 🔁 5 💬 6 📌 1

justine tunney the libc mutex micro optimizations person??
19.12.2024 05:21 · ❤ 23 🔁 0 💬 1 📌 1

This guy needs to read Manufacturing Consent! You're not supposed to do this yourself, you gotta hire editors who already agree with you, this is amateur hour shit…
12.12.2024 22:00 · ❤ 6 🔁 2 💬 3 📌 0

pov: post training researchers learning what pretraining researchers do while waiting for the model to train
06.12.2024 22:21 · ❤ 4 🔁 0 💬 0 📌 0

accidentally typed rm -fr and i'm using that now
03.12.2024 03:16 · ❤ 613 🔁 43 💬 46 📌 13

congrats!!
03.12.2024 19:16 · ❤ 1 🔁 0 💬 0 📌 0

thanks for cleaning it up
01.12.2024 01:57 · ❤ 1 🔁 0 💬 0 📌 0

ai generated slavoj zizek voice on slop video of some bizarre rural chinese cooking
30.11.2024 20:45 · ❤ 58 🔁 2 💬 1 📌 1

incredible new forms of postings emerging
01.12.2024 01:55 · ❤ 3 🔁 0 💬 1 📌 0

interesting is there anywhere i can read more about this
28.11.2024 20:25 · ❤ 1 🔁 0 💬 1 📌 0

it's all approximating numbers w other numbers all the way down. everything else is an implementation detail!
28.11.2024 18:43 · ❤ 3 🔁 0 💬 1 📌 0

if your values do matter replace them w values similar to them aka ones (parameter sharing / shared kv cache / factorizing a large matrix into two small ones / lora / adafactor)
28.11.2024 18:42 · ❤ 4 🔁 0 💬 1 📌 0

i love how every efficiency advance in machine learning is approximating [expensive operation] by either ones (just pass it straight through) or zeros (doesn't matter, just don't compute it/sparsity)
28.11.2024 18:38 · ❤ 20 🔁 1 💬 2 📌 0
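One concrete version of the "replace values with similar values" move in the thread above: factorizing a large matrix into two small ones, LoRA/Adafactor-style. A minimal numpy sketch; shapes and rank are illustrative, and the reconstruction is only exact because the matrix's true rank doesn't exceed the kept rank:

```python
import numpy as np

rng = np.random.default_rng(0)

# a "large" d x d weight matrix that is secretly rank <= r
d, r = 64, 4
W = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))

# approximate W by two small matrices via truncated SVD
U, s, Vt = np.linalg.svd(W)
A = U[:, :r] * s[:r]  # d x r, columns scaled by singular values
B = Vt[:r, :]         # r x d

# 2*d*r values stored instead of d*d
assert A.size + B.size < W.size
assert np.allclose(A @ B, W)  # exact here because rank(W) <= r
```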