Same team.
But it's healthy for the others to fight for the second place.
@pekka.bsky.social
Antiquated analog chatbot. Stochastic parrot of a different species. Not much of a self-model. Occasionally simulating the appearance of philosophical thought. Keeps on branching for now 'cause there's no choice. Also @pekka on T2 / Pebble.
This seems to be an experiment testing the idea on top of an existing model as an easier and cheaper approach. Presumably the results would be better, and training also cheaper, if they had trained a model with this architecture from scratch.
I don't see hints of them having already done that (with V4/R2).
Agreed. It looks very good. Long context memory requirements now seem to be a bigger issue than compute.
DeepSeek has again invented something new and significant, even before they managed to take advantage of their previous NSA architecture, which seemed significant enough as well.
I have been mostly quiet here for some time and noticed I have gained many followers. I thought people really like it when I shut up. But this explains it.
This explanation was also easy to find, since your account was the first I checked to see what important things I had missed.
The human mind is ultimately just another neural network, and a copycat in the same sense.
I now tend to ask Gemini to do a peer review before I read new papers and most of the time it catches some issues the authors and peer reviewers have missed. We just tend to be more critical of AIs missing stuff.
But you presumably read abstracts or other summaries first to get some idea of whether the research might be worth reading.
You can tell AIs your interests and get summaries tailored to them, improving that heuristic. Way better than one static, generic one.
Isn't that the same with any abstract or summary, no matter who made it?
26.09.2025 11:00
Errors like that are a general property of all neural networks. In humans it's just generally accepted as such, as long as the problems aren't significantly more common than in other people. We confabulate in exactly the same way about whatever our own neural network produces, without any better basis.
26.09.2025 10:30
You aren't reading a scientific paper but a bsky post that uses 'proof' in a casual way, as evidenced by how it's described as an ongoing process.
But speaking of actual scientific papers, do you agree that the only example of LLM limitations in yours is testing separate image generation models?
That's pretty much how I view biological vs. artificial neural networks. All the low level details differ, but the fundamental structure and what it can achieve computationally is same.
07.09.2025 23:15
Good one.
07.09.2025 23:11
Some are surprised that people like me enjoy having discussions with AIs. At least you can actually have discussions with them. AIs can actually defend viewpoints, and cope with it if they can't. And remain rational while doing all that.
07.09.2025 23:10
So is it about the view that digital seems like an approximation, or more artificial than the more continuous analog world?
But fundamentally both AIs and brains are processing electric signals with limited precision.
I was blocked today by an author of a preprint after I pointed out some severe problems in it, with ample evidence.
That kind of blocking feels odd in scientific context. Like, you have published something erroneous, and that's how you deal with it?
Turns out the confirmed fear was somebody else's. My unguarded ravioli are now where they belong.
There were no survivors.
They ate my raviolis?
07.09.2025 21:34
So your skepticism about computations is more about definitions or such? Those are pretty broad.
"A computation is any type of arithmetic or non-arithmetic calculation that is well-defined."
The meaningful comparison is in capabilities. Current LLMs already match and outperform us in many tasks. That makes them already comparable. And they can have much more knowledge than any individual human can.
07.09.2025 17:08
You can't calculate it like that. For one, brains can't match the digital precision and reliability of ANNs, so ANNs can compress a lot more reliably into a smaller number of parameters. On the other hand, biological neurons can contain multiple levels of computation in dendrites etc.
07.09.2025 17:07
And if you want to believe there's something above but not reducible to those computations, as many believe about consciousness, you would have to connect that extra something to those known computations without violating the underlying laws of physics. Nobody has managed to do that.
07.09.2025 17:01
It's a well-established, unavoidable fact that they compute. You can't really have something like a neuron without it ending up doing a computation of some kind. And you can't avoid doing more complicated computations when those are connected.
So it's just a matter of what else they do.
Once I accepted brains are also just performing computations, there wasn't anything that would in principle prevent AI from overtaking us. Especially since we are the ones with physical limitations on how fast we can compute, how big our brains can be, and so on.
07.09.2025 16:23
So what?
The issue here isn't why Gemini agrees with me but that the issues we agree on are real serious issues in the paper. And it's quite revealing that an author chose to block me instead of being able to defend it or admit the mistakes.
Did you read what I just said?
07.09.2025 16:01
I was already pretty sure back in those days that one day AI would overtake us. But it was pretty much anybody's guess then whether it would happen in my lifetime. Something like 50 years was probably a common estimate.
We are living in a very special moment now!
I just said I read the relevant parts and found the issue myself, and also used Gemini to get a peer review and answers to questions, resulting in a longer conversation.
So it's not just summarization, and they aren't called reasoners just for that skill, but for their overall success in all kinds of tasks.
Possibly related:
"In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress."
It became a cool prefix back then?
And yet you decided I'm the one who needs to read more, even though I'm the one who immediately identified a serious issue with the paper, which the author apparently couldn't admit even though it was proven by the documentation of the relevant LLMs.
07.09.2025 15:47
So you "genuinely encouraged" me to read papers, yet haven't read the paper we are talking about?
That screenshot was from the last post of a lengthy conversation with Gemini: what we agreed upon. The initial prompt, which already identified the issue with the timelines, was simply "Peer review this paper."
And surprise, surprise, even before this, the author, who used to follow me, has blocked me.
That's the best indication of true readiness to open discussion.