
Vincent Carchidi

@vcarchidi.bsky.social

Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.) All opinions entirely my own.

289 Followers  |  530 Following  |  1,076 Posts  |  Joined: 15.12.2023

Latest posts by vcarchidi.bsky.social on Bluesky

Silver's the kind of guy who can't help but reveal how bitter he is that he didn't succeed at something. I can't read that screenshot and think it's actually about The Journals.

18.08.2025 17:33 — 👍 0    🔁 0    💬 0    📌 0

I just want people in tech and science to think seriously about the consequences of what they're working on. You can't just assume that it's all broadly contributing to the greater good.

18.08.2025 17:10 — 👍 27    🔁 3    💬 5    📌 0

Tbh I see this at least as often in the reverse, especially among people who use them to bypass skill building. A big reason why controlled studies should be the priority (not saying I always follow this myself FWIW, but vibes just ain't enough).

18.08.2025 13:55 — 👍 1    🔁 0    💬 1    📌 0

Inclined to agree with this, even though the preference for the Russians' perspective can be unbelievably blatant.

More than "having" anything on Trump, though, I think the best explanation remains that Trump sees attributes in Putin he would like others to see in himself.

17.08.2025 17:16 — 👍 1    🔁 0    💬 1    📌 0

I think in Sacks' case this was likely his view already but the atmosphere was too bullish to be forward about it (his favorable stance on Gulf chip exports is a tell). Chamath I'm not sure about.

17.08.2025 15:29 — 👍 2    🔁 0    💬 0    📌 0

It is interesting that people close to the president on tech policy, and otherwise influential in SV, are giving off a definite vibe shift after GPT-5.

17.08.2025 15:29 — 👍 4    🔁 1    💬 1    📌 0

This is pretty much what the Institute for Progress is too (DC-based).

16.08.2025 22:16 — 👍 1    🔁 0    💬 0    📌 0

Even in the new budget request, where they are going all in on making use of LLMs/GPTs, the running theme across projects is "these are great, but we basically can't use them for anything mission-critical unless they become robust to uncertainty, reliable, and explainable."

16.08.2025 19:49 — 👍 2    🔁 0    💬 0    📌 0

I read this a long while ago, so I'm a bit hazy, but yeah I remember this sort of odd framing of what DARPA is doing. Their whole deal with AI for 60-odd years has been human-machine symbiosis, and they're pretty clear about limitations. That Third Wave still ain't here yet, and they recognize it.

16.08.2025 19:49 — 👍 2    🔁 0    💬 1    📌 0

doing mostly the opposite, so I really don't know how practical it is to markedly shift course. Maybe Google and Microsoft are better suited to handle that (as Meta once again restructures its AI lab).

16.08.2025 15:43 — 👍 0    🔁 0    💬 0    📌 0

particularly smart decision. I think they've had an awful effect on the field, are intruding into areas where they are unnecessary, etc. BUT, if they are to survive, I would like them to become a normal company that sells products which perform as intended.

They have unfortunately spent 3 years...

16.08.2025 15:43 — 👍 1    🔁 0    💬 1    📌 0
Is AI hitting a wall? OpenAI’s underwhelming new GPT-5 model suggests progress is slowing β€” and competition is changing

I do think that the wisest thing the major AI firms can do right now is focus on applications, even if the underlying models are not progressing the way they were from 2018-2024. OpenAI's decision to have a router allocate compute usage actually strikes me as a...

www.ft.com/content/d012...

16.08.2025 15:43 — 👍 1    🔁 0    💬 1    📌 0

Something relevant I had forgotten about that you might also be interested in, particularly section 7:

arxiv.org/abs/2311.06189

16.08.2025 14:51 — 👍 1    🔁 0    💬 1    📌 0

(limited for explanation, not talking capabilities)

16.08.2025 03:45 — 👍 1    🔁 0    💬 1    📌 0

I ultimately feel that computational modeling is inherently limited because no matter how human-like it is/becomes, it depends on human beings effectively forcing it to train on certain data, automatically removing the burden of autonomous selection in a dynamic, fluid environment.

16.08.2025 03:41 — 👍 1    🔁 0    💬 1    📌 0

Yep, I agree with that part. But it just seems like the theory needs to be in place *before* the modeling. Otherwise you get this backwards argument, where they acknowledge the BabyLM project isn't there yet, but just before that were arguing that LLMs challenge notions of inductive biases (??)

16.08.2025 03:41 — 👍 1    🔁 0    💬 2    📌 0

I'm torn on "meet in the middle" papers like this because at least one of the authors has done great work elsewhere (Tom McCoy). But I also think they end up unintentionally watering down key concepts in the process. Haidt did that in moral psychology.

16.08.2025 03:34 — 👍 1    🔁 0    💬 1    📌 0

Yeah, the latter two are especially...off. Compositionality you can at least argue about. But productivity in particular seems to imply they've overcome out-of-distribution (OOD) generalization problems, which I don't think they would claim.

16.08.2025 03:33 — 👍 1    🔁 0    💬 2    📌 0

Because their whole thing, and you can see it in that paper, is basically working backwards from "well, the LLM's output has a human-like quality to it, therefore it has X, Y, and Z implication for this theory of human cognition." But nobody ever says that about, like, game-playing systems.

16.08.2025 02:39 — 👍 1    🔁 0    💬 1    📌 0

I completely agree. I'm asking from their (the authors'/sympathetic scholars') perspective. If performance is enough to make us question a model of human cognition with LLMs, why is this never brought up in the cases of other human-level systems? Quite impressive ones at that!

16.08.2025 02:38 — 👍 1    🔁 0    💬 1    📌 0

Some others will argue "language" is internal, a generative procedure of some kind, more directly critical for thought (and intelligence by implication).

Both agree on language's cultural importance, etc.

bioling.psychopen.eu/index.php/bi...

16.08.2025 02:20 — 👍 2    🔁 0    💬 0    📌 0
Language is primarily a tool for communication rather than thought - PubMed
Language is a defining characteristic of our species, but the function, or functions, that it serves has been debated for centuries. Here we bring recent evidence from neuroscience and allied discipli...

It's the "language" part that throws the wrench in here, not so much the "intelligence" part.

E.g., you can find arguments like the one below that language is a tool for communication, on the assumption that "language" is speech, signs, etc., but still critical for human culture.

pubmed.ncbi.nlm.nih.gov/38898296/

16.08.2025 02:20 — 👍 4    🔁 0    💬 1    📌 0

Yeah I get that feeling tbh

16.08.2025 01:16 — 👍 0    🔁 0    💬 0    📌 0

less interesting than what it initially appeared to be. You would think the same would apply re: LLMs being trained on human outputs that have the benefit of *already being structured appropriately.*

16.08.2025 00:47 — 👍 5    🔁 0    💬 0    📌 0

Might ironically be an area where ML is a little less hypey than cogsci. Like when DeepMind releases a grandmaster-level chess model trained without tree search, it's an "oh shit" moment...until it sinks in that it was trained on the annotations of a SOTA model that *did* use search. Immediately...

16.08.2025 00:47 — 👍 4    🔁 0    💬 1    📌 0

But idk, the whole thing's a little odd to me still, after everything. I assume we all agree that humans engage in certain kinds of cognition to excel at strategy games like Go or Diplomacy. But when a model matches or exceeds human-level performance, they are *not* models of our cognition? Hmm

16.08.2025 00:41 — 👍 2    🔁 0    💬 1    📌 1

to give the game up for a meaningful human-machine comparison, in that context. But in clarifying what an "inductive bias" is (agree with this goal), the fact that children identify certain data as relevant whereas LLMs can *only* be fed data for pre-training seems a good place to start.

16.08.2025 00:41 — 👍 3    🔁 0    💬 1    📌 0

I guess what's making me think this is the inductive bias section in particular. Not totally clear on why this is a challenge per se. They very fairly acknowledge that LLMs require training on *symbolic outputs* in order to attain their own capabilities, but this would seem...

16.08.2025 00:41 — 👍 1    🔁 0    💬 1    📌 0
Whither symbols in the era of advanced neural networks?
Some of the strongest evidence that human minds should be thought about in terms of symbolic systems has been the way they combine ideas, produce novelty, and learn quickly. We argue that modern neura...

Interesting paper (admittedly read a little quickly, but it's not as long as it looks).

Idk why this particular paper is making me think this, but nobody ever argues that Cicero is a model of human theory of mind or that AlphaGo is a model of human strategizing...

arxiv.org/abs/2508.05776

16.08.2025 00:41 — 👍 7    🔁 0    💬 2    📌 0

Another one for the reading list in that case. Thanks!

16.08.2025 00:15 — 👍 1    🔁 0    💬 0    📌 0
