Well I'd be lying if I said I didn't feel that...
16.10.2025 02:25
@vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.) https://philpeople.org/profiles/vincent-carchidi All opinions entirely my own.
16.10.2025 02:25
Plus, of course, the academic job market, which seems to be not great pretty much everywhere. Though I suspect some regions/countries will improve with some time. In the US, it'll be a while.
16.10.2025 01:58
Been there. Was going to go earlier this year but life stuff kept me from it.
Will say that not all schools will require three letters (not that they don't help, even if optional). The one I was looking at didn't require any (though I already knew some folks there). The "what to study" part...not easy.
Very interesting abstract, looking forward to reading this one.
Has echoes of this paper on the performance stability of LLMs on mathematics benchmarks and problems with reproducibility.
arxiv.org/abs/2504.07086
"a market crash today is unlikely to result in the brief and relatively benign economic downturn that followed the dotcom bust. There is a lot more wealth on the line now, and much less policy space to soften the blow of a correction."
15.10.2025 13:06
Yeah I hear that, and I haven't read his most recent stuff. But IIRC the survival of OpenAI was often bound up in the survival of either SV itself or major firms like Microsoft. I remember an air of "GenAI is SV's last idea, and if it goes belly up, so does SV."
14.10.2025 19:35
I think Zitron is probably right about that, but he seems to have much grander things in mind than just this one company not surviving, no?
14.10.2025 19:25
Vibes of Fukuyama
14.10.2025 18:05
bsky.app/profile/vcar...
14.10.2025 16:22
Would add a willingness to disagree and be criticized to this.
14.10.2025 14:04
A good take from Martin Peers.
14.10.2025 00:24
I could also see an argument that ties them both together - an intelligent system, free of our (human) deficiencies, would not hallucinate, could reliably apply the algorithm(s) responsible for its training data *because* it has none of our frailties, etc.
13.10.2025 21:12
A bit of an issue is that the leading voices in support of Neuro-Symbolic AI have often made the case for it in reference to "real" intelligence or whatever.
I could see plausible arguments for or against that, but it adds to the confusion - is it for "real" intelligence or greater applicability?
Good piece to get your bearings on the bubble talk.
The gist is that economists/organizations are themselves debating this, and that AI risks are intersecting with other potentially destabilizing factors like political pressure on the US Federal Reserve.
www.ft.com/content/fe47...
Yeah in retrospect, I've done this myself in the past.
Very much an empirical question, but anecdotally, I've had someone tell me that GPT-4o not following their image generation instructions fully was the model exercising its "artistic liberty."
The shift discussed from RL in a traditional ML context to Neuro-Symbolic is worth paying attention to...not confined to this research team.
12.10.2025 12:25
On the defense angle, this is pretty good.
podcasts.apple.com/us/podcast/h...
I've always assumed LLM-Modulo is most promising for verification on narrow-ish problems, but not quite as narrow as problems GOFAI was put toward. Boosting accuracy with the increased flexibility of LLMs as generators. Would follow the trend of successful N-S approaches being mostly specialized.
12.10.2025 12:15
Yeah this is a good point...I suppose then the question is whether there's any real baggage attached to that kind of talk. If pressed, do people still say it's thinking or do they default to just saying they don't know? Probably varies quite a bit.
11.10.2025 21:14
I also think, real progress notwithstanding, the reasoning models extend a tendency to not ask: what is language? The assumption is that language is obvious: a verbalized thought process.
Some thoughts on that here:
philarchive.org/rec/CARTCA-19
Academically, I think the Kambhampati-led ASU group has been notably level-headed about this. (And I am partial to how he specifically tends to approach the study of language models, i.e. drawing first from compsci instead of searching for humanity in them.)
arxiv.org/abs/2504.09762
My opinion is that calling the intermediate tokens "Chains-of-Thought" has effectively turned reasoning models into a collective Rorschach test that nobody asked to take.
11.10.2025 20:49
Descriptively, I think things may go toward something along these lines, at least among the public. They "think," but not like us.
Though it also depends quite a bit on commercial dynamics. Would the general public say ChatGPT is "thinking" if that's not what the interface said? (I don't know).
The 2024 US elections were supposed to be at risk of AI-generated misinformation leading to a crisis. We're about a year out from that, and the quality of the misinfo has only gotten better. Not saying it's not a problem (obviously is), but I think the problem is more about how to judge sources.
11.10.2025 14:34
I have no idea how this'll look a decade from now, but yeah, more realistic misinformation hasn't really led to the info apocalypse that was predicted. I think it's just leading to a more banal (still bad) situation where the internet is filled with more garbage than before.
11.10.2025 14:29
One thing I'd be happy to get input on, just to be blunt: I do find the urge to automate a person's life without systematically consulting them about it first a little odd...especially if this leads to dismay that non-AI people don't love it. Clashes with virtuous tech posturing.
10.10.2025 22:00
Agree with the sentiment, but I do think a number of people (not OP) who say "I want robots to do my dishes so I can do art" don't actually want their manual chores automated.
And, if we're being charitable, they may not be wrong to worry about all household chores being automated. Time will tell.
I'd add that the US govt may behave in reaction to a crash in what we might call non-traditional ways. Idk how that impacts everything else.
10.10.2025 20:38
I think there's ample room to argue that the tech is not useless but the correction would still not be quick.
My disclaimer is that I have no idea how it'll play out, but difficult for me to not see the risk of a painful period of correction.
This gets into some much thornier issues about what it is we're actually doing when trying to explain something. And that's a can of worms (there are so many possible angles). Though I'd argue it is one of many ways to raise the importance of rigorous theory construction, which is often neglected.
10.10.2025 18:51