@platypii.bsky.social

14 Followers  |  38 Following  |  107 Posts  |  Joined: 23.08.2024

Latest posts by platypii.bsky.social on Bluesky

it's clear you haven't used them if you think that is the level of response quality

23.11.2025 20:23 — 👍 0    🔁 0    💬 0    📌 0

Should we believe Bender 2020 or Bender 2025? One of them is lying to promote her book.

01.11.2025 17:26 — 👍 0    🔁 0    💬 1    📌 0

That's weird, just a few years ago you said you parroted it from someone else on Twitter

01.11.2025 17:14 — 👍 0    🔁 0    💬 1    📌 0

Red hot 🔥 take on how to architect large data applications entirely in JS.

Kenny Daniel covers how he built Hyparquet & HighTable for loading Parquet files in the browser.

www.youtube.com/watch?v=J06r...

Subscribe to get notified when we ship more videos from #CascadiaJS 2025! 📺

27.10.2025 16:39 — 👍 1    🔁 1    💬 0    📌 0
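For a taste of the approach from the talk, here is a minimal sketch of loading a Parquet file in the browser with hyparquet. The asyncBufferFromUrl and parquetReadObjects entry points reflect hyparquet's documented API as best I recall it, and the URL and column names are made up for illustration; check the repo for exact signatures.

```js
// Sketch: read a remote Parquet file in the browser with hyparquet.
import { asyncBufferFromUrl, parquetReadObjects } from 'hyparquet'

// Hypothetical file URL, for illustration only.
const url = 'https://example.com/data.parquet'

// An AsyncBuffer fetches byte ranges on demand, so only the Parquet
// footer and the row groups you actually read cross the network.
const file = await asyncBufferFromUrl({ url })

// Read rows as plain JS objects; `columns` (hypothetical names here)
// restricts the read to just those columns. Some compression codecs
// may need the separate hyparquet-compressors package.
const rows = await parquetReadObjects({ file, columns: ['id', 'name'] })
console.log(rows)
```

The design point: HTTP range requests plus a pure-JS reader mean no backend data service is required, which is the "entirely in JS" architecture the talk describes.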

If you think LLMs don't reason, then either you haven't used them, or your definition of reasoning is so specific to humans that it is useless as a definition.

Just proclaiming that "form is not meaning!!" doesn't make it true.

21.10.2025 21:27 — 👍 0    🔁 0    💬 0    📌 0

wait are you claiming you personally created AI? lolol

18.10.2025 03:34 — 👍 0    🔁 0    💬 1    📌 0

Sorry that people have access to unlimited education and personal tutoring, even in the most remote locations, for free thanks to ChatGPT. Good luck on your quest to stop it ✊

18.10.2025 03:13 — 👍 2    🔁 0    💬 0    📌 1

"killed their son" oh please. have you actually read the claims? Chat gpt consistently tried to help the kid. The kid LIED about his motivations in order to get the answer he wanted. From reading the transcripts, chatgpt handled a difficult situation better than 99% of humans would.

15.10.2025 23:34 — 👍 0    🔁 0    💬 1    📌 0

It literally does interact with the world!

If you can't admit that text, images, tool calls, RL with verified rewards, and RLHF are "interacting with the world" then you are just redefining words to suit your whims.

I'm done with you... we'll see in the coming years who's right. 🤷 Good luck.

10.10.2025 21:46 — 👍 0    🔁 0    💬 1    📌 0

"the hardware would break pretty fast" this is nonsense. what are you even suggesting?

10.10.2025 21:26 — 👍 0    🔁 0    💬 1    📌 0

Is it an open system or not?

Your initial goalpost: LLMs can't be conscious because they are not in an open system.

I pointed out they do get context from interacting with the world.

New goalpost: fine, it's an open system, but it's not the PHYSICAL world.

You are not a serious person. 🙄

10.10.2025 21:25 — 👍 0    🔁 0    💬 1    📌 0

Absurd claim. A non-linear system with a trillion parameters, dozens of transformer layers, and dozens of attention heads interacting is not complex?

You know an LLM can simulate a video game, right? So an LLM is MORE complex than a video game AI.

10.10.2025 21:18 — 👍 0    🔁 0    💬 1    📌 0

Moving the goalposts again, eh?

The model gets context from the user and from verified rewards and from simulation environments. That sounds a lot like an organism interacting with the physical world.

10.10.2025 20:51 — 👍 0    🔁 0    💬 1    📌 0

But LLMs are NOT a closed system. And you admitted it: reinforcement learning (part of training) interacts with the world (simulated environments and human feedback).

And ALSO at inference it's directly interacting with the world via user and tools.

So how is that not an "open system" again??

10.10.2025 20:48 — 👍 0    🔁 0    💬 1    📌 0

Training includes post-training stages. Everything that happens in determining the weights is the "training" phase, including SFT and RL.

And ALSO there is interaction with the world at inference time. Models use tools in a loop that provide them context about the world.

09.10.2025 18:40 — 👍 0    🔁 0    💬 1    📌 0
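Concretely, "tools in a loop" looks something like the sketch below. Everything here (agentLoop, callModel, the get_time tool) is a hypothetical stand-in, not any particular vendor's API:

```js
// Toy tool-use loop: the model requests a tool, the runtime executes it,
// and the result is appended to the context the model sees next turn.

const tools = {
  get_time: async () => new Date().toISOString(), // fresh info from the world
}

// Hypothetical stub standing in for a real chat-completion API call.
async function callModel(messages) {
  const sawToolResult = messages.some(m => m.role === 'tool')
  return sawToolResult
    ? { content: `The time is ${messages.at(-1).content}` }
    : { toolCall: { name: 'get_time', args: {} } }
}

async function agentLoop(userMessage) {
  const messages = [{ role: 'user', content: userMessage }]
  while (true) {
    const reply = await callModel(messages)
    if (!reply.toolCall) return reply.content // no tool requested: done
    const result = await tools[reply.toolCall.name](reply.toolCall.args)
    // Tool output becomes new context: the inference-time feedback loop.
    messages.push({ role: 'tool', name: reply.toolCall.name, content: result })
  }
}

console.log(await agentLoop('What time is it?'))
```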

Ok, you said initial training, fine. But you're the one moving the goalposts. Our entire argument was about whether LLMs could be conscious. You argued that there was no feedback loop with the world. I pointed out that the post training phase does. So how do you reconcile that with your claims?

09.10.2025 18:38 — 👍 1    🔁 0    💬 1    📌 0

False. Reinforcement Learning is a critical part of modern post-training for LLMs, and involves a feedback loop between the model and the world. Try again.

09.10.2025 17:45 — 👍 0    🔁 0    💬 1    📌 0
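To make the claimed feedback loop concrete, here is a toy sketch of RL with a verifiable reward. The policy object and its update are hypothetical placeholders; real post-training (RLHF, PPO/GRPO-style pipelines) is far more involved:

```js
// Toy RL-with-verified-rewards loop: the "world" is a checker that
// scores the model's answer, and the score feeds back into an update.

// Reward comes from verifying the answer against ground truth.
function verify(answer, groundTruth) {
  return answer.trim() === groundTruth ? 1 : 0
}

// Hypothetical stand-in for a policy model and its update rule.
const policy = {
  totalReward: 0,
  sample: prompt => (Math.random() < 0.5 ? '4' : '5'), // guess an answer
  update(reward) { this.totalReward += reward }, // placeholder for a gradient step
}

for (let step = 0; step < 10; step++) {
  const answer = policy.sample('What is 2 + 2?')
  policy.update(verify(answer, '4')) // environment -> model feedback
}
console.log('total reward:', policy.totalReward)
```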

I'll assume that you are arguing for a non-reductionist view of consciousness... that it depends on the interaction of the brain and the environment.

Well guess what? LLMs interact with the environment too. Differently than humans, but interaction nonetheless. Both during training and inference.

09.10.2025 17:14 — 👍 0    🔁 0    💬 2    📌 0

A trillion artificial neurons connected non-linearly by trillions of weights is not complex to you? What??

09.10.2025 16:58 — 👍 0    🔁 0    💬 1    📌 0

... where else would it emerge from?

09.10.2025 16:55 — 👍 0    🔁 0    💬 2    📌 0

Oh, so your definition of consciousness requires that it's running on a squishy brain?

I mean, that's a pretty useless definition. Sure, LLMs run on silicon, not biology. That proves nothing about what they are capable of. Can an LLM think and solve problems like us? Obviously it can.

09.10.2025 16:55 — 👍 0    🔁 0    💬 1    📌 0

Neurons are neither deep nor complex, so why would you believe consciousness can emerge in the brain?

Consciousness is an emergent property of a complex system made up of simple parts.

If you think LLMs are inherently more limited than brains, you need to argue WHY.

09.10.2025 16:44 — 👍 0    🔁 0    💬 1    📌 0

I majored in math; use any formal logic system you like: first-order, higher-order, constructivist.

I would LOVE to see your argument for why "LLMs cannot be conscious" follows from Bayes' Theorem. Make sure to give your formal definition of consciousness.

09.10.2025 16:28 — 👍 0    🔁 0    💬 1    📌 0

> "This is mathematically impossible and therefor logical disproven."

What is "this" that is "mathematically impossible"?

There is no proof that LLMs cannot be conscious because there is no mathematical definition of consciousness.

09.10.2025 16:14 — 👍 0    🔁 0    💬 1    📌 0

We don't have a criterion for consciousness, so now I'm certain you don't know what you're talking about.

But please provide your argument formally; I would be fascinated. You might even get a Nobel prize.

09.10.2025 16:08 — 👍 0    🔁 0    💬 1    📌 0

You have no logical argument so you just say "go read". You're both condescending and wrong.

Of course there are differences. But you have failed to explain why you think these are RELEVANT differences.

Why does it matter that we can calculate LLMs? We can simulate neurons too.

09.10.2025 16:01 — 👍 0    🔁 0    💬 1    📌 0

"However, one thing that is definitively not sentient, and never will be, is software... this is precisely because they do not think."

WHY do you think machines are incapable of sentience (or thinking)? There is no actual argument in the article. You just assert it with no evidence?

09.10.2025 00:11 — 👍 4    🔁 0    💬 0    📌 0

What's your goal here? Are you trying to convince people you are correct? I can't imagine this is an effective strategy for making your side seem like the one grounded in reality. Maybe you're just scared of all the changes in the world? It's fine if you just need to vent.

08.10.2025 22:24 — 👍 0    🔁 0    💬 1    📌 0

You might be the least effective Bluesky poster I've ever seen

08.10.2025 22:08 — 👍 0    🔁 0    💬 1    📌 0

If you say so, it must be true

07.10.2025 23:43 — 👍 0    🔁 0    💬 1    📌 0
