
Pekka Lund

@pekka.bsky.social

Antiquated analog chatbot. Stochastic parrot of a different species. Not much of a self-model. Occasionally simulating the appearance of philosophical thought. Keeps on branching for now 'cause there's no choice. Also @pekka on T2 / Pebble.

2,578 Followers  |  548 Following  |  8,271 Posts  |  Joined: 03.07.2023

Latest posts by pekka.bsky.social on Bluesky

Same team.

But it's healthy for the others to fight for second place.

02.10.2025 18:41 — 👍 2    🔁 0    💬 0    📌 0

This seems to be an experiment testing the idea on top of an existing model, as an easier and cheaper approach. Presumably the results would be better, and training cheaper too, if they had trained a model with this architecture from scratch.

I don't see hints that they have already done that (with V4/R2).

02.10.2025 12:47 — 👍 1    🔁 0    💬 0    📌 0

Agreed. It looks very good. Long-context memory requirements now seem to be a bigger issue than compute.

DeepSeek has again invented something new and significant, even before managing to take advantage of their previous NSA architecture, which seemed significant enough as well.

02.10.2025 12:43 — 👍 1    🔁 1    💬 1    📌 0
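
As a rough back-of-the-envelope illustration of that memory pressure, here is a sketch of KV-cache size under plain attention (Python, with made-up model dimensions; not DeepSeek's actual configuration):

```python
# KV-cache memory for one sequence under standard attention:
# layers x 2 (keys and values) x kv_heads x head_dim x tokens x bytes/element.
# All numbers below are illustrative assumptions, not a real model's specs.
layers, kv_heads, head_dim = 60, 8, 128   # hypothetical GQA configuration
tokens, bytes_per_elem = 128_000, 2       # 128K-token context, fp16

kv_bytes = layers * 2 * kv_heads * head_dim * tokens * bytes_per_elem
print(f"{kv_bytes / 2**30:.1f} GiB per sequence")  # ~29 GiB
```

The cache grows linearly with context length while the weights stay fixed, which is why at long contexts memory, not compute, becomes the binding constraint that sparse-attention designs try to shrink.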

I have been mostly quiet here for some time and noticed I have gained many followers. I thought people really like it when I shut up. But this explains it.

The explanation was also easy to find, since your account was the first I checked to see what important things I had missed.

02.10.2025 12:37 — 👍 2    🔁 0    💬 1    📌 0

The human mind is ultimately just another neural network, and a copycat in the same sense.

I now tend to ask Gemini to do a peer review before I read new papers, and most of the time it catches issues the authors and peer reviewers have missed. We just tend to be more critical when AIs miss things.

29.09.2025 12:50 — 👍 1    🔁 0    💬 0    📌 0
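
A minimal sketch of that pre-reading review workflow, assuming the google-generativeai Python package; the model name, file path, and API key are placeholders, not the author's actual setup:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # assumes a Gemini API key

paper = genai.upload_file("new_paper.pdf")       # hypothetical local copy of the preprint
model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name

# Same prompt quoted later in this thread: "Peer review this paper."
response = model.generate_content([paper, "Peer review this paper."])
print(response.text)
```

Any chat-capable model could be swapped in; the point is the review pass before reading, not the specific API.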

But you presumably read abstracts or other summaries first to get some idea of whether the research might be worth reading.

You can tell an AI your interests and get summaries tailored to them, improving that heuristic. Way better than a single static, generic one.

26.09.2025 11:25 — 👍 0    🔁 0    💬 1    📌 0

Isn't that the same with any abstract or summary, no matter who made it?

26.09.2025 11:00 — 👍 0    🔁 0    💬 1    📌 0

Errors like that are a general property of all neural networks. With humans, we just generally accept it as such, as long as the problems aren't significantly more common than in other people. We spin stories in exactly the same way about whatever our neural network churns out, without any better basis.

26.09.2025 10:30 — 👍 2    🔁 1    💬 0    📌 0

You aren't reading a scientific paper but a bsky post that uses 'proof' in a casual way, as evidenced by how it's described as an ongoing process.

But speaking of actual scientific papers, do you agree that the only example of LLM limitations in yours is the testing of separate image-generation models?

08.09.2025 22:20 — 👍 1    🔁 0    💬 0    📌 0

That's pretty much how I view biological vs. artificial neural networks. All the low-level details differ, but the fundamental structure, and what it can achieve computationally, are the same.

07.09.2025 23:15 — 👍 1    🔁 0    💬 0    📌 0

Good one. 😂

07.09.2025 23:11 — 👍 1    🔁 0    💬 0    📌 0

Some are surprised that people like me like to have discussions with AIs. At least you can actually have discussions with them. AIs can actually defend viewpoints, and cope with it when they can't. And they remain rational while doing all that.

07.09.2025 23:10 — 👍 4    🔁 0    💬 0    📌 0

So is it about the view that digital seems like an approximation, or more artificial than the more continuous analog world?

But fundamentally, both AIs and brains are processing electrical signals with limited precision.

07.09.2025 22:56 — 👍 0    🔁 0    💬 1    📌 0

I was blocked today by an author of a preprint after I pointed out some severe problems in it, with ample evidence.

That kind of blocking feels odd in a scientific context. Like, you have published something erroneous, and that's how you deal with it?

07.09.2025 22:49 — 👍 4    🔁 0    💬 1    📌 0

Turns out the confirmed fear was somebody else's. My unguarded ravioli are now where they belong.

There were no survivors.

07.09.2025 22:12 — 👍 2    🔁 0    💬 0    📌 0

They ate my raviolis?

07.09.2025 21:34 — 👍 8    🔁 0    💬 2    📌 0
Computation - Wikipedia

So your skepticism about computation is more about definitions or some such? Those are pretty broad.

"A computation is any type of arithmetic or non-arithmetic calculation that is well-defined."

07.09.2025 17:11 — 👍 0    🔁 0    💬 1    📌 0

The meaningful comparison is in capabilities. Current LLMs already match or outperform us in many tasks. That already makes them comparable. And they can hold much more knowledge than any individual human can.

07.09.2025 17:08 — 👍 1    🔁 0    💬 0    📌 0

You can't calculate it like that. For one, brains can't match the digital precision and reliability of ANNs, so ANNs can compress a lot more reliably into a smaller number of parameters. On the other hand, biological neurons can contain multiple levels of computation in dendrites etc.

07.09.2025 17:07 — 👍 1    🔁 0    💬 1    📌 0

And if you want to believe there's something above, but not reducible to, those computations, as many believe about consciousness, you would have to connect that extra something to those known computations without violating the underlying laws of physics. Nobody has managed to do that.

07.09.2025 17:01 — 👍 2    🔁 0    💬 1    📌 0

It's a well-established, unavoidable fact that they compute. You can't really have something like a neuron without it ending up doing a computation of some kind. And you can't avoid doing more complicated computations when those are connected.

So it's just a matter of what else they do.

07.09.2025 17:01 — 👍 1    🔁 0    💬 1    📌 0

Once I accepted brains are also just performing computations, there wasn't anything that would in principle prevent AI from overtaking us. Especially since we are the ones with physical limitations on how fast we can compute, how big our brains can be, and so on.

07.09.2025 16:23 — 👍 2    🔁 0    💬 1    📌 0

So what?

The issue here isn't why Gemini agrees with me, but that the issues we agree on are real, serious problems in the paper. And it's quite revealing that an author chose to block me instead of defending the paper or admitting the mistakes.

07.09.2025 16:08 — 👍 0    🔁 0    💬 1    📌 0

Did you read what I just said?

07.09.2025 16:01 — 👍 0    🔁 0    💬 1    📌 0

I was already pretty sure back in those days that one day AI would overtake us. But it was pretty much anybody's guess then whether it would happen in my lifetime. Something like 50 years was probably a common estimate at the time.

We are living in a very special moment now!

07.09.2025 16:00 — 👍 5    🔁 1    💬 1    📌 0

I just said I read the relevant parts and found the issue myself, and also used Gemini to get a peer review and answers to questions, which turned into a longer conversation.

So it's not just summarization, and they aren't called reasoners just for that skill, but for their overall success in all kinds of tasks.

07.09.2025 15:55 — 👍 0    🔁 0    💬 1    📌 0
Nanotechnology - Wikipedia

Possibly related:

"In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress."

It became a cool prefix back then?

07.09.2025 15:50 — 👍 3    🔁 0    💬 1    📌 0

And yet you decided I'm the one who needs to read more, even though I'm the one who immediately identified a serious issue with the paper, one the author apparently couldn't admit despite it being proven by the documentation of the relevant LLMs.

07.09.2025 15:47 — 👍 0    🔁 0    💬 1    📌 0

So you "genuinely encouraged" me to read papers, yet haven't read the paper we are talking about?

That screenshot was from the last post of a lengthy conversation with Gemini, what we agreed upon. Initial prompt, that already identified the issue with timelines was simply "Peer review this paper."

07.09.2025 15:43 — 👍 1    🔁 0    💬 1    📌 1

And surprise, surprise: even before this, the author, who used to follow me, had blocked me.

That's the best indication of a true readiness for open discussion.

07.09.2025 15:22 — 👍 2    🔁 0    💬 0    📌 1
