@ntraft.bsky.social
PhD student at the Vermont Complex Systems Institute. Interested in ML, evolution, self-organization, & collective intelligence. http://ntraft.com https://t.co/ja3fdtRLdM
Just wait till you hear about 0's decision...
03.08.2025 21:54 · 0 likes · 0 reposts · 0 replies · 0 quotes
Even here, they miss the point. They somehow don't realize there is no universal alignment. Our species is a collective: a collective of collectives, even. To be "aligned" is just to be part of the collective; an integrated whole. Which requires us *all* to participate in building & integrating AI.
03.08.2025 16:23 · 2 likes · 0 reposts · 0 replies · 0 quotes
OpenAI, of all companies, should know this.
Ah, but they were not *exactly* created to further AI technology. They were created specifically to secure against the mythical AI End Times, the Apocalypse. So they closed down rather than opening up.
The best thing a company could possibly do is to create an ecosystem around its technology. Everyone claims to want to be "a platform, not a product". But their behavior belies their big words.
"Platform" should mean an open foundation for building. Not a limited-use API gateway.
With its focus on short-term profitability, the US has forgotten how to make markets competitive. And companies have forgotten that they are only *vehicles* for the final product, not ends in themselves.
03.08.2025 16:23 · 1 like · 0 reposts · 1 reply · 0 quotes
As long as we agree that no state space is truly Markovian, I can get behind that!
03.08.2025 12:13 · 1 like · 0 reposts · 1 reply · 0 quotes
What do you mean exactly? Can a state space be fully observable, yet incomplete? Or are you just referring to an unknown transition function, or something else?
03.08.2025 11:10 · 0 likes · 0 reposts · 1 reply · 0 quotes
Good news: those are all partially observable, so you get to keep those!
In retrospect I should have said RL should never be *framed* any other way... Except for maybe a chess game.
I'm very excited to announce that I've just signed a contract with @princetonupress.bsky.social for a new book, tentatively titled "The Genomic Code"!
01.08.2025 17:10 · 126 likes · 9 reposts · 6 replies · 0 quotes
In reality, all environments are only partially observable, so this is the only regime in which RL should be evaluated.
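To make that concrete: a minimal sketch, assuming the gymnasium API and its stock CartPole-v1 task, of how even a textbook "fully observable" benchmark becomes partially observable the moment you hide part of the state (which is roughly what every real sensor does).

```python
import numpy as np
import gymnasium as gym


class MaskVelocities(gym.ObservationWrapper):
    """Turn CartPole into a POMDP by hiding the velocity components.

    CartPole's state is [cart position, cart velocity, pole angle,
    pole angular velocity]; dropping the velocities means the current
    observation alone no longer determines the next state, i.e. the
    observation process is no longer Markovian.
    """

    def __init__(self, env):
        super().__init__(env)
        low = env.observation_space.low[[0, 2]]
        high = env.observation_space.high[[0, 2]]
        self.observation_space = gym.spaces.Box(low, high, dtype=np.float32)

    def observation(self, obs):
        # Keep only position and angle; the velocities become hidden state.
        return obs[[0, 2]].astype(np.float32)


env = MaskVelocities(gym.make("CartPole-v1"))
obs, info = env.reset(seed=0)
print(obs.shape)  # (2,) instead of (4,)
```

An agent in the wrapped environment needs memory or a belief state to recover what was hidden; the argument here is that this is the default case, not the exception.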
03.08.2025 00:42 · 4 likes · 0 reposts · 2 replies · 0 quotes
Hmm, I guess at that point I'd call it something other than "LLM" and "Transformer"; even if we accidentally use the same name, it seems to me that would constitute a significantly different system. So I still think this is a core justification for why the current paradigm will never be enough.
31.07.2025 12:31 · 2 likes · 0 reposts · 1 reply · 0 quotes
Guessing you're talking about a situation where there are multiple equally good opportunities to turn, so Google should say something like, "if the current light is green, keep going; else, take the turn now"?
31.07.2025 03:16 · 1 like · 0 reposts · 1 reply · 0 quotes
I've always held that the ideal attitude for a PhD is, "the experience of doing a PhD is worthwhile in and of itself, regardless of what comes after". Even if you continue as a researcher, you may never have as much freedom to learn/explore as you do now. (Assuming a good advisor.)
31.07.2025 03:10 · 2 likes · 1 repost · 1 reply · 0 quotes
Sounds like a useful framework. Looking forward to reading this new position paper.
30.07.2025 00:31 · 0 likes · 0 reposts · 0 replies · 0 quotes
Wonder if a truck and a car count as different embodiments? Probably still workable even if not. But maybe sedan & pickup = same embodiment; tractor-trailer = different embodiment.
29.07.2025 22:09 · 0 likes · 0 reposts · 0 replies · 0 quotes
I just realized that the "AI scientist" vision is all about this "collecting of facts", like packrats or crows.
bsky.app/profile/ntra...
And you would think that this would finally be our chance to focus on that, now that we have language models! They could actually be very good at explanation... if they actually "understood", that is.
29.07.2025 17:58 · 2 likes · 0 reposts · 0 replies · 0 quotes
It's quite revealing... in tech circles nowadays we are so fixated on the conception of "science as innovation", completely leaving behind the other half of science: fundamental understanding.
29.07.2025 17:58 · 2 likes · 0 reposts · 1 reply · 0 quotes
I just realized that all these hyped-up "AI scientist" concepts concentrate on the idea of *discovery*: new algorithms, new materials, new products, without exception. Not one of them focuses on *understanding*, arguably the larger role of a scientist!
29.07.2025 17:58 · 3 likes · 0 reposts · 1 reply · 1 quote
So, you need *some* adaptability to changes in body type. But I don't know exactly how far the "one brain, any embodiment" folks would want to push that. It does sound extreme.
29.07.2025 17:34 · 2 likes · 0 reposts · 0 replies · 0 quotes
On the flip side, if each brain is so specific to its embodiment that, e.g., every time you want to tweak the sensors on your self-driving car you need to *completely re-gather all data from all cities and make all new maps*, then this would be pretty devastating to that use case, no?
29.07.2025 17:34 · 2 likes · 0 reposts · 2 replies · 0 quotes
Ehhhh, no, I unfortunately don't think so, because if someone is doing "the right thing for the wrong reasons" then they will absolutely be led astray into doing the wrong thing, usually sooner rather than later. Depends on the severity of their wrongness, perhaps.
29.07.2025 00:57 β π 3 π 0 π¬ 1 π 0Team from the University of Vermont + External faculty posing for a picture by a river
VCSI executive director announcing location for the conference in 2026, standing on a large stage with colleagues
female scholar presenting a parallel talk, engaging with the audience
PhD student presenting a research poster
Proud of our team's talks and posters at #IC2S2 in Norrköping, Sweden - what a great week. We're excited to host next year in Vermont!
25.07.2025 11:13 · 10 likes · 3 reposts · 0 replies · 1 quote
We are going to spend the next few years finding out exactly *why* it was a horrible idea to unleash a tsunami of vibe-coded apps made by idiots and scammers on an unprepared world.
Welcome to the Entirely Foreseeable AI Consequences Era.
Apparently Papers With Code was abruptly sunset, for reasons that are unclear.
26.07.2025 11:33 · 3 likes · 0 reposts · 0 replies · 0 quotes
So sad to see paperswithcode is discontinued, but grateful that the @hf.co team is, as always, stepping up to support the community!
It was an incredible resource for use cases, common themes in papers, and visualizing how models have improved on evals over time:
huggingface.co/papers/trend...
Been studying Mixture of Experts recently. It's baffling to learn that typically different tokens are routed to *different* experts. How can you be an "expert" at a particular word?? I suspect this name lends us very poor intuition for what these models actually do!
[Image: MoE routing diagram showing 2 tokens and 4 "experts"]
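For a concrete picture of that routing, here is a rough sketch in PyTorch (an illustration only, not any particular model's implementation; the function name, sizes, and expert shapes are made up): a learned gate scores every token, each token is dispatched to its top-k highest-scoring "experts", and the expert outputs are mixed back together. The "expertise", such as it is, is whatever cluster of token contexts the gate carves out, which is why the name gives such poor intuition.

```python
import torch

def moe_forward(x, gate, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x:       (num_tokens, d_model) token representations
    gate:    nn.Linear(d_model, num_experts) producing routing logits
    experts: list of small FFNs, each mapping d_model -> d_model
    """
    probs = gate(x).softmax(dim=-1)                    # (tokens, experts)
    weights, chosen = torch.topk(probs, k, dim=-1)     # each (tokens, k)
    weights = weights / weights.sum(-1, keepdim=True)  # renormalize over the k picked
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = chosen[:, slot] == e                # tokens that picked expert e in this slot
            if mask.any():
                w = weights[mask, slot].unsqueeze(-1)  # (n_selected, 1)
                out[mask] += w * expert(x[mask])
    return out

# Tiny example mirroring the figure: 2 tokens, 4 "experts".
d, n_exp = 8, 4
gate = torch.nn.Linear(d, n_exp)
experts = [torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.GELU(), torch.nn.Linear(d, d))
           for _ in range(n_exp)]
tokens = torch.randn(2, d)
print(moe_forward(tokens, gate, experts).shape)  # torch.Size([2, 8])
```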
There are people, in tech (and now in the government!), who will mislead you about what current AI models are capable of. If we don't call them out, they'll drag us all down.
23.07.2025 20:01 · 20 likes · 6 reposts · 3 replies · 0 quotes
more evidence for the "a bunch of AI cultists dismantled large parts of the federal government because they believed that their LLMs were intelligent and could do the work better" thesis
23.07.2025 17:51 · 8859 likes · 2676 reposts · 171 replies · 78 quotes
He's absolutely drunk with power... and summer tomatoes
24.07.2025 00:45 · 1 like · 0 reposts · 0 replies · 0 quotes