
Joel Z Leibo

@jzleibo.bsky.social

I can be described as a multi-agent artificial general intelligence. www.jzleibo.com

3,296 Followers  |  239 Following  |  56 Posts  |  Joined: 16.11.2024

Posts by Joel Z Leibo (@jzleibo.bsky.social)

Another possible evolutionary path is one where an ecosystem of agents is designed so well that the interactions between them have the same sort of entertainment value for spectators as reality TV or fiction does. (Spectators might also be participants at times: the Westworld model.)

31.01.2026 00:18 — 👍 15    🔁 1    💬 2    📌 0

💯

31.01.2026 20:01 — 👍 1    🔁 0    💬 0    📌 0

Words like 'consciousness' give off very individualistic vibes.

The real action is in the "cultural politics" that gives meaning to the word. Moltbook appeals as an idea to anyone who finds this sort of multi-agent / social-intelligence / society-first viewpoint compelling.

31.01.2026 19:31 — 👍 1    🔁 0    💬 0    📌 0

To anyone encountering Moltbook this week and wondering about AI personhood, consciousness, sentience, etc.: we published a very relevant paper in October, "A Pragmatic View of AI Personhood".

arxiv.org/abs/2510.26396

31.01.2026 19:30 — 👍 15    🔁 2    💬 2    📌 0

I liked everything about that movie except the essentialism

02.01.2026 13:31 — 👍 1    🔁 0    💬 0    📌 0

The exemplar of superhuman A.I. performance to date
www.nytimes.com/2025/12/02/o...

07.12.2025 16:57 — 👍 91    🔁 22    💬 11    📌 4
Many Fighting Climate Change Worry They Are Losing the Information War

It's not about information, it's about identity.

It's not "misinformation", it's propaganda.

The bullshit machine is well-funded & understands the value of networked amplification.

As long as there's no personal cost to lying or holding false beliefs, they will.

www.nytimes.com/2025/11/30/c...

01.12.2025 00:19 — 👍 129    🔁 29    💬 6    📌 0

Yes, in part. I think the parasocial relationship to fiction writers is stronger than to (say) illustrators or stock photographers (though not fine art).

But more centrally I mean that a lot of writing, including utilitarian stuff like reports, matters because it expresses someone's stance. +

26.11.2025 22:28 — 👍 8    🔁 1    💬 2    📌 0

This chart (which applies even more to social media than it did to TV) lives in my head rent free.

Social media enveloping traditional media means everything and everyone is now competing in the entertainment market. Boring stuff like policy that affects millions of lives doesn't stand a chance.

23.11.2025 20:24 — 👍 505    🔁 173    💬 16    📌 22

Interesting mystery. It is known that animals can learn to control neurons pretty much anywhere in the brain. But they cannot learn to ignore hunger, which probably means they can't turn off hunger-sensing neurons. How is that avoided in the brain? They even have DA inputs.

23.11.2025 00:59 — 👍 23    🔁 2    💬 14    📌 0

Is our self-conception as homo faber / "tool-using animals" just because stone endures better than behavior?

If we had ethnographic records of australopithecines, H. erectus, &c, would all the stages between simian communication and "language" add up to a story more fascinating than flint?

19.11.2025 05:19 — 👍 67    🔁 7    💬 4    📌 4

These ideas are also at the heart of my new book "What Is Intelligence?" (out via @mitpress.bsky.social & Antikythera), where I explore how human-technology symbiosis may be the latest Major Evolutionary Transition: bit.ly/3H1p8F6

17.11.2025 18:08 — 👍 2    🔁 1    💬 0    📌 0

dream-logic is more powerful than logic-logic and oral cultures must encode knowledge into powerful meme-spells newsletter.squishy.computer/p/llms-and-h...

09.11.2025 04:42 — 👍 48    🔁 11    💬 4    📌 0
John Milton from Areopagitica.

Like, even before JS Mill gave us a utilitarian ("net upside") argument to justify freedom of speech, there was an older intuition that banning symbols risks doing violence to thought itself, and ought to be approached with "warinesse."

07.11.2025 05:03 — 👍 8    🔁 1    💬 0    📌 0
Student Researcher, PhD, Winter/Summer 2026 β€” Google Careers

I'm hiring a student researcher for next summer at the intersection of MARL x LLM. If you're a PhD student with experience in MARL algorithm research, please apply and drop me an email so that I know you've applied! www.google.com/about/career...

07.11.2025 04:31 — 👍 11    🔁 4    💬 1    📌 1

I've managed it a few times..! Though not too many

06.11.2025 06:19 — 👍 1    🔁 0    💬 0    📌 0

Noticed that my latest paper announcement got much more attention on Bluesky than on Twitter; first time that's happened in my experience.

05.11.2025 12:27 — 👍 9    🔁 0    💬 2    📌 0
Exploring a space-based, scalable AI infrastructure system design

Today my colleagues in the Paradigms of Intelligence team have announced Project Suncatcher:

research.google/blog/explori...

tl;dr: How can we put datacentres in space where solar energy is near limitless? Requires changes to current practices (due to radiation and bandwidth issues).

🧪 #MLSky

04.11.2025 22:36 — 👍 24    🔁 3    💬 3    📌 0

My favorite part of pragmatism is when it's like "maybe instead of worrying about shit that doesn't matter we should worry about shit that does."

"Oh and btw we'll learn a lot more about the shit that doesn't in the process anyway." 🫣

03.11.2025 12:54 — 👍 13    🔁 1    💬 0    📌 0

This paper is a great exposition of how "personhood" doesn't need to be, and in fact should not be, all-or-nothing or grounded in abstruse, ill-defined metaphysical properties. As I argued in my recent @theguardian.com essay, we can and should prepare now: www.theguardian.com/commentisfre...

02.11.2025 15:30 — 👍 7    🔁 2    💬 0    📌 0

This paper is required reading. A pragmatic approach really clarifies the topic, even if, like me, you are mostly a "no" on the whole idea of artificial autonomous agents.

02.11.2025 14:10 — 👍 25    🔁 1    💬 2    📌 0

This sounds cynical, but it represents a huge advance over empty "AGI" speculation.

It's a political question, not a technical one. Without social equality, models *cannot do* many kinds of work (e.g., negotiate agreements or manage workers). So they will only be human-equivalent if we decide they are.

02.11.2025 11:01 — 👍 147    🔁 12    💬 14    📌 1

Except the linked paper agrees with the comment on personhood in this thread. It says we should stop the metaphysics and refocus on pragmatic effects of institutions and individuals deeming entities to be persons.

02.11.2025 09:06 — 👍 1    🔁 0    💬 1    📌 0

Joel is on a mission to get the EA/rationalist set to embrace Rortyian pragmatism. It's a tough job but someone's gotta do it.

01.11.2025 20:31 — 👍 14    🔁 1    💬 0    📌 0

This looks interesting

01.11.2025 08:53 — 👍 2    🔁 0    💬 0    📌 0
A Pragmatic View of AI Personhood
The emergence of agentic Artificial Intelligence (AI) is set to trigger a "Cambrian explosion" of new kinds of personhood. This paper proposes a pragmatic framework for navigating this diversification...

[9/9] Read the full paper here:
arxiv.org/abs/2510.26396

Coauthors:

Sasha Vezhnevets,
@xtan,
@WilCunningham

31.10.2025 12:35 — 👍 22    🔁 1    💬 3    📌 0

[8/9] By rejecting the foundationalist quest for a single, essential definition, our pragmatic approach offers a more flexible way to think about integrating AI agents into our society. Different "personhood-related contexts" call for different solutions. There are no panaceas.

31.10.2025 12:34 — 👍 13    🔁 1    💬 1    📌 0

[7/9] We also consider "personhood as a problem".

This includes "dark patterns" where AI systems may be designed to mimic social cues and exploit our social heuristics, leading to risks of emotional manipulation and exploitation.

In this case, personhood attribution causes harm.

31.10.2025 12:34 — 👍 15    🔁 0    💬 1    📌 0

[6/9] However, there may also be ownerless autonomous agents. In this case, we may confer a default person-like status to support sanctionability in cases where they cause harm. An AI with assets can be deterred from rule breaking by the threat of having to forfeit them.

31.10.2025 12:34 — 👍 15    🔁 0    💬 2    📌 1

[5/9] We explore "personhood as a solution" for problems like "responsibility gaps".

Most AI agents will have owners or guardians; in this case, responsibility should usually flow to their principal.

31.10.2025 12:34 — 👍 15    🔁 1    💬 1    📌 0