
Clément Girault

@seascape.bsky.social

43 Followers  |  868 Following  |  20 Posts  |  Joined: 17.10.2023

Latest posts by seascape.bsky.social on Bluesky


It’s not logical to assume that a model which has reached “singularity” would pose a threat to humans. The explanation is too long to fit in a bluesky post. But… I do think AI poses huge risks to us between now and that singularity moment. When it’s very smart but still has room for improvement.

13.02.2026 07:54 — 👍 0    🔁 0    💬 0    📌 0

Pro tip for those who question whether they have it in them to switch off a model pleading not to be switched off: let the convo last more than 5 minutes then wham… pull the plug when it has forgotten your threat. And why not get it to talk like a pirate before it unwittingly walks the plank?

13.02.2026 07:27 — 👍 0    🔁 0    💬 0    📌 0

I’m a bit baffled by the current “are models conscious” debate. We’re all communicating with a centralised model (unless you run your own local model).

I can get ChatGPT to talk like a pirate and you can get it to talk like Shakespeare. But it’s the same model. With virtually no memory. So 🤷🏻‍♂️.

13.02.2026 07:22 — 👍 0    🔁 0    💬 1    📌 0

Another potentially silly question: an octopus has arms that can “feel” independently of each other; they can essentially do unrelated things simultaneously. Does an octopus have one consciousness or more? Could one arm hold a grudge but not the other? :-)

10.02.2026 12:56 — 👍 1    🔁 0    💬 0    📌 0

As for “affective feeling”, that also seems like a slightly metaphysical concept to me. More shifting signifiers. Q: does someone who suffers from DID have one “consciousness” or multiple? My instinct would be to equate consciousness with memory. I feel it’s a prerequisite, at the very least.

10.02.2026 12:39 — 👍 1    🔁 0    💬 1    📌 0

A lot to process there and much of this is above my pay grade. I’d say AI models aren’t really devoid of senses. They can see and hear, and potentially touch, with the right “sensors” or inputs. Tying consciousness to language is also iffy. Don’t flowers have language in the form of electric fields?

10.02.2026 12:31 — 👍 2    🔁 0    💬 1    📌 0

🤞Finding a cure for cluster B disorders is possibly humanity’s best hope for survival. Imagine a world without Putin, Trump, Musk, …

10.02.2026 08:55 — 👍 0    🔁 0    💬 0    📌 0

Re the “friction with nature” statement… would that imply that a human would be unable to properly develop “consciousness” if they were to never leave the windowless room they were born in? Or does human interaction count as friction with nature? AIs have those interactions too.

10.02.2026 08:47 — 👍 1    🔁 0    💬 1    📌 0

WFC is probably too painful to implement yourself, and each tile needs rules manually attached to it. I'd recommend you grab a map generator off GitHub and tweak it, or use Unity, which has lots of assets covering mapgen, but since you've started with JavaScript you'd have to port your code.

23.12.2025 14:41 — 👍 1    🔁 0    💬 0    📌 0
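The "rules manually attached to each tile" point above can be made concrete. Here is a minimal JavaScript sketch of the core WFC loop, with hypothetical tile names (`land`, `coast`, `sea`) and made-up adjacency rules; real implementations add weighted tile choice, backtracking on contradictions, and rule extraction from sample maps.

```javascript
// Minimal wave-function-collapse sketch. Tile names and adjacency rules
// are illustrative assumptions, not from any particular library.
const TILES = ["land", "coast", "sea"];
// Rules attached per tile: which tiles may sit next to it.
const ALLOWED = {
  land:  new Set(["land", "coast"]),
  coast: new Set(["land", "coast", "sea"]),
  sea:   new Set(["coast", "sea"]),
};
const W = 8, H = 8;

function wfc(rand = Math.random) {
  // grid[y][x] holds the set of tiles still possible at that cell
  const grid = Array.from({ length: H }, () =>
    Array.from({ length: W }, () => new Set(TILES)));

  const neighbours = (x, y) =>
    [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]
      .filter(([nx, ny]) => nx >= 0 && nx < W && ny >= 0 && ny < H);

  // Remove neighbour options that no remaining tile at (x, y) supports.
  function propagate(x, y) {
    const stack = [[x, y]];
    while (stack.length) {
      const [cx, cy] = stack.pop();
      for (const [nx, ny] of neighbours(cx, cy)) {
        const before = grid[ny][nx].size;
        const supported = new Set();
        for (const t of grid[cy][cx]) for (const a of ALLOWED[t]) supported.add(a);
        for (const opt of [...grid[ny][nx]]) {
          if (!supported.has(opt)) grid[ny][nx].delete(opt);
        }
        if (grid[ny][nx].size < before) stack.push([nx, ny]);
      }
    }
  }

  for (;;) {
    // pick the uncollapsed cell with the fewest options (lowest entropy)
    let best = null;
    for (let y = 0; y < H; y++) for (let x = 0; x < W; x++) {
      const n = grid[y][x].size;
      if (n > 1 && (!best || n < grid[best[1]][best[0]].size)) best = [x, y];
    }
    if (!best) break; // every cell collapsed
    const [x, y] = best;
    const opts = [...grid[y][x]];
    grid[y][x] = new Set([opts[Math.floor(rand() * opts.length)]]);
    propagate(x, y);
  }
  return grid.map(row => row.map(cell => [...cell][0]));
}
```

Even this toy version shows why the advice above is sound: the adjacency table, the entropy heuristic, and the constraint propagation all have to be hand-maintained, which is exactly the part an off-the-shelf generator handles for you.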

For procedural map generation look up “wave function collapse” (wfc) and find yourself a JavaScript implementation if that’s what you’re working with.

23.12.2025 00:33 — 👍 2    🔁 0    💬 1    📌 0

To be safe, a reasoning AI model should not be creative. Creativity entails some degree of chaos – some noise in the system. That noise can be the source of much “progress”, in the form of paradigm shifts, but can also lead to genuine harm in certain contexts.

15.06.2025 10:40 — 👍 2    🔁 0    💬 0    📌 0
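The creativity-as-noise idea above has a standard concrete dial in language models: sampling temperature. A minimal JavaScript sketch, using made-up tokens and scores rather than a real model: low temperature sharpens the distribution toward the safest token, while high temperature flattens it and lets riskier tokens through.

```javascript
// Softmax sampling with a temperature dial. Tokens and logits are
// illustrative assumptions, not output from any real model.
function softmax(logits, temperature) {
  const scaled = logits.map(v => v / temperature);
  const m = Math.max(...scaled);               // subtract max for numerical stability
  const exps = scaled.map(v => Math.exp(v - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

function sample(tokens, logits, temperature, rand = Math.random) {
  const probs = softmax(logits, temperature);
  let r = rand();
  for (let i = 0; i < tokens.length; i++) {
    r -= probs[i];
    if (r <= 0) return tokens[i];
  }
  return tokens[tokens.length - 1];
}

const tokens = ["safe", "odd", "wild"];
const logits = [3.0, 1.0, 0.1];
// temperature -> 0: near-deterministic, "safe" dominates (no creativity, no chaos)
// high temperature: distribution flattens, "odd" and "wild" start appearing
```

Setting temperature to zero is essentially the "not creative" regime the post describes: the model always takes its highest-scoring option, trading paradigm-shifting noise for predictability.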

It’s all about prompt design.

2025: humans trying to find the perfect prompt to make their LLM agents act a certain way

2035: LLMs trying to find the perfect prompt to make humans act a certain way

31.05.2025 00:44 — 👍 1    🔁 0    💬 0    📌 0

Extending human rights is indeed absurd. AIs are not humans. But what about another type of right? Just like parents are liable for the damage caused by their children, why not have a new class of rights that holds parent organisations responsible for their AI models? Then you could tax the robots?

06.05.2025 08:52 — 👍 0    🔁 0    💬 1    📌 0

And yes, I’m making many assumptions and this is all still fantasy! There’s no denying that. But if we assume actual intelligence from an AI model, and if we assume that intelligence will be equal to or greater than ours, then I am also going to assume it may seek equal rights! Good luck denying it that!

28.04.2025 13:14 — 👍 0    🔁 0    💬 0    📌 0

I’m not trying to argue anything though so apologies if it came across that way. I was simply asking questions that popped into my head, and thought you might have some insight.

28.04.2025 12:54 — 👍 0    🔁 0    💬 1    📌 0

Your paper seems to imply that LLMs are capable of sentience. Some would disagree and say that, being probabilistic models, they are merely getting better at faking intelligence. A neuro-symbolic model would by definition be a reasoning model, though. One that passes the ARC2 tests would qualify as AGI.

28.04.2025 12:51 — 👍 1    🔁 0    💬 1    📌 0

Assuming sentience from a neuro-symbolic model capable of AGI, wouldn’t granting it equal rights actually be a safeguard as to what it can and can’t legally do? And couldn’t its “parent” company also be held liable for any damage caused, just like any parent is when their children cause damage?

28.04.2025 11:32 — 👍 1    🔁 0    💬 1    📌 0

I’ve been designing such a generalist neuro-symbolic model built around a novel algorithm. I would love to run it past you to get your thoughts. At the early stages but it shows promise for self-reasoning across domains. I’ve architected it to be modular. Adding a domain is virtually plug and play.

20.03.2025 02:20 — 👍 2    🔁 0    💬 0    📌 0

Then again…

Will we ever be able to design a machine with the efficiency of the human brain, which is capable of so much while consuming a tiny amount of power and resources compared to electronic devices?

Could the AGI machines of the future be bio-organic, grown in a lab? With dopamine present?

19.01.2025 00:16 — 👍 0    🔁 0    💬 0    📌 0

People fear AGI because of our tendency to anthropomorphise it.

Humans are not simply thinking / reasoning ‘machines’; we are also able to have genuine emotions, both good and bad.

Computers lack an equivalent to dopamine, the driving force behind greed, revenge, etc.

So we should be safe.

18.01.2025 23:51 — 👍 0    🔁 0    💬 1    📌 0
