@vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.) All opinions entirely my own.

Silver's the kind of guy who can't help but reveal how bitter he is that he didn't succeed at something. I can't read that screenshot and think it's actually about The Journals.
18.08.2025 17:33

I just want people in tech and science to think seriously about the consequences of what they're working on. You can't just assume that it's all broadly contributing to the greater good.
18.08.2025 17:10

Tbh I see this at least as often in the reverse, especially among people who use them to bypass skill building. A big reason why controlled studies should be the priority (not saying I always follow this myself FWIW, but vibes just ain't enough).
18.08.2025 13:55

Inclined to agree with this, even though the preference for the Russians' perspective can be unbelievably blatant.
More than "having" anything on Trump, though, I think the best explanation remains that Trump sees attributes in Putin he would like others to see in himself.
I think in Sacks' case this was likely his view already, but the atmosphere was too bullish to be forward about it (his favorability toward Gulf chip exports is a tell). Chamath I'm not sure about.
17.08.2025 15:29

It is interesting that people close to the president on tech policy and otherwise influential in SV are giving off a definite vibe shift after GPT-5.
17.08.2025 15:29

This is pretty much what the Institute for Progress is too (DC-based).
16.08.2025 22:16

Even in the new budget request, where they are going all in on making use of LLMs/GPTs, the running theme across projects is "these are great, but we basically can't use them for anything mission-critical unless they become robust to uncertainty, reliable, and explainable."
16.08.2025 19:49

I read this a long while ago, so I'm a bit hazy, but yeah I remember this sort of odd framing of what DARPA is doing. Their whole deal with AI for 60-odd years has been human-machine symbiosis, and they're pretty clear about limitations. That Third Wave still ain't here yet, and they recognize it.
16.08.2025 19:49

doing mostly the opposite, so I really don't know how practical it is to markedly shift course. Maybe Google and Microsoft are better suited to handle that (as Meta once again restructures its AI lab).
16.08.2025 15:43

particularly smart decision. I think they've had an awful effect on the field, are intruding into areas where they are unnecessary, etc. BUT, if they are to survive, I would like them to become a normal company that sells products which perform as intended.
They have unfortunately spent 3 years...
I do think that the wisest thing the major AI firms can do right now is focus on applications, even if the underlying models are not progressing the way they were from 2018-2024. OpenAI's decision to have a router allocate compute usage actually strikes me as a...
www.ft.com/content/d012...
Something relevant I had forgotten about that you might also be interested in, particularly section 7:
arxiv.org/abs/2311.06189
(limited for explanation, not talking capabilities)
16.08.2025 03:45

I ultimately feel that computational modeling is inherently limited because no matter how human-like it is/becomes, it depends on human beings effectively forcing it to train on certain data, automatically removing the burden of autonomous selection in a dynamic, fluid environment.
16.08.2025 03:41

Yep, I agree with that part. But it just seems like the theory needs to be in place *before* the modeling. Otherwise you get this backwards argument, where they acknowledge the BabyLM project isn't there yet, but just before were arguing that LLMs challenge notions of inductive biases (??)
16.08.2025 03:41

I'm torn on "meet in the middle" papers like this because at least one of the authors has done great work elsewhere (Tom McCoy). But I also think they end up unintentionally watering down key concepts in the process. Haidt did that in moral psychology.
16.08.2025 03:34

Yeah, the latter two are especially...off. Compositionality you can at least argue about. But productivity in particular seems like it implies they've overcome OOD problems, which I don't think they would claim.
16.08.2025 03:33

Because their whole thing, and you can see it in that paper, is basically working backwards from "well, the LLM's output has a human-like quality to it, therefore it has X, Y, and Z implication for this theory of human cognition." But nobody ever says that about, like, game-playing systems.
16.08.2025 02:39

I completely agree. I'm asking from their (the authors'/sympathetic scholars') perspective. If performance is enough to make us question a model of human cognition with LLMs, why is this never brought up in the cases of other human-level systems? Quite impressive ones at that!
16.08.2025 02:38

Some others will argue "language" is internal, a generative procedure of some kind, more directly critical for thought (and intelligence by implication).
Both agree on language's cultural importance, etc.
bioling.psychopen.eu/index.php/bi...
It's the "language" part that throws the wrench in here, not so much the "intelligence" part.
E.g. you can find arguments like the one below that language is a tool for communication, on the assumption that "language" is speech, signs, etc., but critical for human culture.
pubmed.ncbi.nlm.nih.gov/38898296/
Yeah I get that feeling tbh
16.08.2025 01:16

less interesting than what it initially appeared to be. You would think the same would apply re: LLMs being trained on human outputs that have the benefit of *already being structured appropriately.*
16.08.2025 00:47

Might ironically be an area where ML is a little less hypey than cogsci. Like when DeepMind releases a grandmaster-level chess model trained without tree search, it's an "oh shit" moment...until it sinks in that it was trained on the annotations of a SOTA model that *did* use search. Immediately...
16.08.2025 00:47

But idk, the whole thing's a little odd to me still, after everything. I assume we all agree that humans engage in certain kinds of cognition to excel at strategy games like Go or Diplomacy. But when models match or exceed human-level performance, they are *not* models of our cognition? Hmm
16.08.2025 00:41

to give the game up for a meaningful human-machine comparison, in that context. But in clarifying what an "inductive bias" is (agree with this goal), the fact that children identify certain data as relevant whereas LLMs can *only* be fed data for pre-training seems a good place to start.
16.08.2025 00:41

I guess what's making me think this is the inductive bias section in particular. Not totally clear on why this is a challenge per se. They very fairly acknowledge that LLMs require training on *symbolic outputs* in order to attain their own capabilities, but this would seem...
16.08.2025 00:41

Interesting paper (read admittedly a little quickly, but it's not as long as it looks).
Idk why this particular paper is making me think this, but nobody ever argues that Cicero is a model of human theory of mind or that AlphaGo is a model of human strategizing...
arxiv.org/abs/2508.05776
Another one for the reading list in that case. Thanks!
16.08.2025 00:15