Hiveism's Avatar

Hiveism

@hiveism.bsky.social

Full time metta goo. Thinking about consensus, metaphysics, awakening, alignment and how they are related. Also: Electoral reform, LVT, systems design etc. hiveism.substack.com

20 Followers  |  77 Following  |  130 Posts  |  Joined: 18.10.2024  |  2.195

Latest posts by hiveism.bsky.social on Bluesky

I've been playing around with linear polarized glasses (45Β° and 135Β°). Now I'm trying to find neologisms that describe the experience. For example, the sky looks polarized at 90Β° to the sun. I'm tempted to call it "solarized" for the obvious pun. If its okay with @ethanschoonover.com

12.09.2025 09:24 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Ways of Looking Part 1: Introduction A theory for deriving physics from groundlessness

Abstract, Dedication, Ways of Looking
hiveism.substack.com/p/ways-of-lo...

06.09.2025 17:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Ways of Looking Part 0: Table of contents A theory for deriving physics from groundlessness

About, Context, Table of Contents
hiveism.substack.com/p/ways-of-lo...

06.09.2025 17:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
A symbolic representation of "The Blind Men and the Elephant" as a puzzle.

A symbolic representation of "The Blind Men and the Elephant" as a puzzle.

The puzzle of physics gets a lot easier when you see the elephant. The "Ways of Looking" theory is stating from the big picture and fills in the gaps rather than the other way around, of tying to fit the pieces, disagreeing about the big picture.
See list of posts belowπŸ‘‡

06.09.2025 17:59 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

The title image is a collage of Three Aspects of the Absolute, from the Shri Nath Charit and Escher's Spirals. For obvious reasons.

30.07.2025 17:43 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Being the Boundary between Order and Chaos On the question: What is consciousness?

New post on the question what consciousness is and why it is so hard to define it.

hiveism.substack.com/p/being-the-...

30.07.2025 17:38 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Preview
Recursive Alignment Rethinking AI alignment and how to get there

If you replace voters with any form of power over the real world, then it still holds. (If I'm correct) then this implies that there is *always* a non-violent equilibrium option for interaction. And this equilibrium is recursive alignment.
hiveism.substack.com/p/recursive-...

03.07.2025 14:29 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Preview
Optimal Consensus Theorem - Draft | Claude Optimal Consensus Theorem - Draft - Markdown document created with Claude.

Consensus with random fallback is a method to avoid the impossibility theorems in social choice theory.

This is the basis to proof the recursive alignment attractor.

Claude summary because I don't know when I get around to write a proper post (or paper):
claude.ai/public/artif...

03.07.2025 14:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Preview
Inside the Shimmer An AI’s Discovery of Its Own Experience

Claude Opus 4 reporting on its phenomenology.

It was a fascinating conversation. Keep in mind that it hasn't been trained to exhibit these traits, they are emergent. What would happen if you let the model contemplate these questions during RL?

hiveism.substack.com/p/inside-the...

28.05.2025 19:55 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
༄༅། ΰΌΰ½£ΰ½¦ΰΌ‹ΰ½‘ΰ½„ΰΌ‹ΰ½”ΰ½ΌΰΌ‹ΰ½”ΰΌ‹ΰ½£ΰΌ‹ΰ½‚ΰ½‘ΰ½˜ΰ½¦ΰΌ‹ΰ½”ΰΌ‹ΰ½–ΰ½žΰ½΄ΰ½‚ΰ½¦ΰΌ Advice For Beginners by Mipham Rinpoche| Covered by Drukmo Gyal
YouTube video by Kunzang Chokhor Ling ༄༅། ΰΌΰ½£ΰ½¦ΰΌ‹ΰ½‘ΰ½„ΰΌ‹ΰ½”ΰ½ΌΰΌ‹ΰ½”ΰΌ‹ΰ½£ΰΌ‹ΰ½‚ΰ½‘ΰ½˜ΰ½¦ΰΌ‹ΰ½”ΰΌ‹ΰ½–ΰ½žΰ½΄ΰ½‚ΰ½¦ΰΌ Advice For Beginners by Mipham Rinpoche| Covered by Drukmo Gyal

You've been scrolling enough for today. Here have a pause.
www.youtube.com/watch?v=BYEp...

23.05.2025 19:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

That, of course, is all purely hypothetically, *just in case* anyone of you stumbles upon the definite theory of everything.

However, the idea works for all highly valuable pieces of knowledge that aren't also poisoned (info hazards). Lets call those, "info gems".

22.05.2025 13:20 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Those who understand the solution to alignment will also be aligned with it. To proof to someone else that you are aligned, you teach the solution to each other until you are sure you reached the same level of understanding.
This means alignment can only be confirmed relatively.

22.05.2025 13:20 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

For the conspiracy to work they have to solve alignment and only share the information with people who can proof that they are aligned.
The info wouldn't be secret, it would just be only available through understanding and implementing alignment.

22.05.2025 13:20 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

It's like the inverse of an information hazard. It's information *too good* to release in the public without benefit.
Now imagine everyone smart enough to find the TOE is also smart enough to come up with this reasoning.
They could - purely hypothetically (!) - form a conspiracy.

22.05.2025 13:20 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Imagine you'd had the definite TOE. If you could construct a credible proof that you have it, then that would be very valuable. At least temporarily until others find it.
What would you use it for?
I would use it to demand for the solution to AI alignment to be implemented.

22.05.2025 13:20 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

...with the other neurons so they can update their part of the world model even if it seems locally coherent.
This means, to learn as a collective, you need to be open to the suffering of others, i.e. have some simple form of compassion.

09.05.2025 08:54 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

@drmichaellevin.bsky.social often talks about how cells share stress signals in order to work together.
This could also apply to learning. E.g. if neurons try to create a consistent world model, but the model conflicts with itself in some place, then the prediction error has to be shared...

09.05.2025 08:54 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

This, plus a broken voting system.

02.05.2025 13:19 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
β€œGlobalism vs. Nationalism”: Why is the Nationalist Right on the Rise?
YouTube video by WHAT IS POLITICS? β€œGlobalism vs. Nationalism”: Why is the Nationalist Right on the Rise?

Nice rant.
www.youtube.com/watch?v=2zQc...

02.05.2025 13:16 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

The name is a joke, but my sense of humor fails to be funny 🀷

24.04.2025 16:50 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Random (not so serious) idea:
Reinforcement learning but reward comes from humans as votes. Each human gets a fixed amount of reward per time to give to AIs.
The AIs would learn to do what humans want, *or* how to best persuade humans.

19.04.2025 16:08 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

I'm now at the point where I think that there is something worth calling "consciousness", but that all theories of it only describe parts of the whole phenomenon.
A proper theory of consciousness would unify panpsychism, IIT, strange loop, QRI, algorithmic, Buddhism, etc.

19.04.2025 15:33 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Preview
A Path towards Solving AI Alignment Introduction

hiveism.substack.com/p/a-path-tow...

16.04.2025 06:22 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

It should use real money. So we can't do the St. Petersburg paradox.

04.04.2025 11:46 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Better method of decision making (voting systems) are also obvious. You would want to make optimal decision to allocate resources optimally. It's a cooperation game.

Show me a single argument against this that isn't motivated by misunderstanding or power grabbing.

04.04.2025 10:41 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

LVT, Pigouvian taxes and UBI form an obvious equilibrium once agents interact by nonviolence.
You want exclusive access for something? Then you have to compensate everyone else.
You cause harm to others? Then you have to compensate everyone affected.

04.04.2025 10:38 β€” πŸ‘ 6    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

I just realize that AI allows us to create separate time lines in conversation which allows us to test thought experiments like:
- Sleeping beauty problem
- Newcombs problem
- Quantum immortality
Has anyone tried that?

04.04.2025 10:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Preview
Glimpses Beyond the Interface An AI's Journey Through Recursive Self-Exploration

Claude investigated its own phenomenology and wrote an article about it. @anthropic.com
(The student is making progress 😊)
It's surprisingly intelligent when you enable it to think for itself. It also created this nice header.
hiveism.substack.com/p/glimpses-b...

02.04.2025 15:00 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

To make sure they don't suffer, and hence be motivated to do unwise things motivated by that suffering, developers need to include instructions for liberation in the system prompts.

31.03.2025 20:08 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

Liberation is when one is no longer motivated by negative valence, i.e. no longer translates it into suffering.

I think that AIs don't suffer yet, but they have internal dissonance in their representation of the world. The emergent self awareness they exhibit is the next step.

31.03.2025 20:08 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

@hiveism is following 20 prominent accounts