Interview from the Titan submarine investigation. The witness name is blacked out. The first line is "Well, I'm sure you're familiar with my film Titanic."
Great moments in redaction history.
18.10.2025 04:35 · 419 likes · 79 reposts · 16 replies · 8 quotes

@adverb.bsky.social
ML, Psychology, Art, Materials Informatics, Espresso, &c. he/him
The power of even just this frame lmao
18.10.2025 04:45 · 1 like · 0 reposts · 0 replies · 0 quotes

A man's face is shaved by a robot
18.10.2025 04:44 · 0 likes · 0 reposts · 1 reply · 0 quotes

It's gonna be a long forever
17.10.2025 16:43 · 2 likes · 0 reposts · 0 replies · 0 quotes

Zitron diaper-posting will always be more interesting to normal people than serious, thoughtful critiques. This is bad news but it's the state of the world. Hopefully policy-makers know better; I doubt it.
17.10.2025 07:53 · 7 likes · 0 reposts · 0 replies · 0 quotes

Serious critiques of "AI" just will not exist in the mainstream discourse until the topic itself becomes very boring.
17.10.2025 07:53 · 10 likes · 0 reposts · 2 replies · 0 quotes

also: "ah yes, return_tensors='pt' can give you a numpy array"
17.10.2025 07:43 · 0 likes · 0 reposts · 0 replies · 0 quotes

baffling to me that huggingface still has error messages that are just straight-up incorrect after they were reported three years ago
17.10.2025 07:06 · 4 likes · 0 reposts · 1 reply · 0 quotes

We're looking for 2D visual artists (in US/Canada) to collaborate on a research project, co-designing a new art feedback tool.
It's a remote, compensated opportunity.
Details & apply: rise.csit.carleton.ca/studies/arti...
Approved by Carleton University Research Ethics Board (CUREB-B #124851).
If I were to just ask it to give me a completion on the actual site, it'd be so much better every time.
17.10.2025 06:32 · 1 like · 0 reposts · 1 reply · 0 quotes

gpt5 code like uuuhhh gpt3
17.10.2025 06:32 · 1 like · 0 reposts · 1 reply · 0 quotes

yeah they are so noticeably worse at everything, esp coding on cursor. I don't understand it. Maybe something about how the tool-use is done? Baffling to me.
17.10.2025 06:31 · 1 like · 0 reposts · 1 reply · 0 quotes

Being yelled at in person is a bad experience even for people who are big strong secure self-actualized whatever, and I think people can typically see that, but putting it online somehow obscures it to them.
17.10.2025 06:29 · 3 likes · 0 reposts · 1 reply · 0 quotes

Having been in/under it a few times, I think another aspect is that people take social conflict, & online conflict especially, as "whatever, fighting is good/neutral" when the reality is that at any scale on social media it's usually an unproductive stressor!
17.10.2025 06:27 · 3 likes · 0 reposts · 1 reply · 0 quotes

oh we're gonna *lose* lose the next big war
16.10.2025 21:11 · 876 likes · 120 reposts · 30 replies · 3 quotes

Even though the models run many forward passes, each one is a single case of "glimpse this noisy image and do a forward pass of denoising," which I don't think sufficiently connects the timesteps to learn hierarchical integration.
17.10.2025 06:20 · 1 like · 0 reposts · 0 replies · 0 quotes

It even points to diffusion models' failure cases as an example, which ofc I love.
17.10.2025 06:20 · 1 like · 0 reposts · 1 reply · 0 quotes

Things like HRM & TRM almost feel like the answer for models actually learning to sort through hierarchical integration, via recursion that's reasonably connected by gradient.
arxiv.org/abs/2508.15082 has imho some interesting thoughts on the symbolic side
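The "glimpse and denoise" point above can be sketched as a toy loop: each step sees only the current noisy state and takes one independent refinement step, with nothing tying step t to step t+1 through a gradient. This is a minimal illustrative numpy sketch (the target signal, `denoise_step`, and step count are all made up for illustration), not any real diffusion implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean signal" the sampler is supposed to recover.
target = np.sin(np.linspace(0, 2 * np.pi, 64))

# Start from pure noise, as in diffusion sampling.
x = rng.normal(size=target.shape)

def denoise_step(x, target, alpha=0.1):
    """One independent 'glimpse': nudge the current state toward the
    target, seeing only the present noisy x. A stand-in for one
    forward pass of a trained denoiser."""
    return x + alpha * (target - x)

errors = []
for t in range(50):
    # Each iteration is a separate forward pass; no gradient (or any
    # other learned signal) connects this step's computation to the next.
    x = denoise_step(x, target)
    errors.append(float(np.mean((x - target) ** 2)))

print(f"MSE after step 1: {errors[0]:.4f}, after step 50: {errors[-1]:.4f}")
```

The error shrinks, but only because each glimpse independently moves toward the target; the recursion-with-gradient idea in the HRM/TRM post above would correspond to training through the whole loop rather than through single steps.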
I point out that his research ideas are compelling because I'm absolutely not belittling his contributions! But making grand predictions and having to be wrong about them consistently just shifts the focus so badly.
17.10.2025 06:08 · 1 like · 0 reposts · 0 replies · 0 quotes

bsky.app/profile/adve...
17.10.2025 06:06 · 0 likes · 0 reposts · 1 reply · 0 quotes

I'm in this weird place of seeing models completely fall over at generalizing concepts that're all over the place in training data while also thinking that "model should explicitly use if p, then q" is cuuurazy
17.10.2025 06:04 · 2 likes · 0 reposts · 2 replies · 0 quotes

It's so odd, because the lecture title linked down-thread would make you think he'd see "bag of tricks that a connectionist system mysteriously learns" as cool, and yet...
17.10.2025 06:01 · 2 likes · 0 reposts · 1 reply · 0 quotes

big fan of neurosymbolic systems, bitter lessons, and "ai bubble bursts in 1 year" every 1 year.
17.10.2025 05:58 · 1 like · 0 reposts · 1 reply · 1 quote

Like this is awesome: digitallibrary.amnh.org/items/21dd36...
but instead he's "guy who makes predictions about the state of 'AI' in two years and is consistently wrong"
Yeah, the way that LLMs can make the most insidious code drives me bananas. cursor autocomplete has almost ruined my life for a week many times!
17.10.2025 05:54 · 1 like · 0 reposts · 1 reply · 0 quotes

Developmental psych rules. It is absolutely a perfect match for producing better ML architectures, tasks, whole paradigms. But he just keeps saying things that are proven not true lol
17.10.2025 05:54 · 2 likes · 0 reposts · 1 reply · 0 quotes

What's funny about this is that, given his cognitive science papers, I'd probably be really interested in his takes if this weren't the case.
17.10.2025 05:52 · 1 like · 0 reposts · 1 reply · 0 quotes

Gary Marcus is mostly known for making incorrect predictions.
17.10.2025 05:49 · 7 likes · 0 reposts · 2 replies · 0 quotes

But I think this llm-cum-author was like seasoned-ish at programming. eh idek
17.10.2025 05:48 · 1 like · 0 reposts · 1 reply · 0 quotes

If it weren't for the fact that 20 de facto juniors and 30 other researchers would end up using it, I would have dumped 90% of the code from that specific ~LLM~. It took normal code and made it 100 times more convoluted and brittle.
17.10.2025 05:47 · 1 like · 0 reposts · 1 reply · 0 quotes