
Peter N. Salib

@petersalib.bsky.social

Assistant Professor, University of Houston Law Center. AI, Risk, Constitution, Economics.

1,268 Followers  |  141 Following  |  31 Posts  |  Joined: 15.07.2023

Latest posts by petersalib.bsky.social on Bluesky

AI Rights for Human Flourishing
AI companies are racing to create Artificial General Intelligence (AGI): AI systems that outperform humans at most economically valuable work.

papers.ssrn.com/sol3/papers....

29.07.2025 16:12 | 👍 0  🔁 0  💬 0  📌 0
AI Rights for Human Flourishing
AI companies are racing to create Artificial General Intelligence (AGI): AI systems that outperform humans at most economically valuable work.

…the legal system should organize AGI labor is of great importance.

Our proposal: Do what has always worked before. Let all workers, human and AI, own their labor, make contracts to sell it, and keep the proceeds.

Not for the sake of AIs, but for the sake of global human flourishing.

29.07.2025 16:12 | 👍 1  🔁 0  💬 1  📌 1

…been feudal lords, encomenderos, slaveholders, and so on. In the AGI economy, the elite owners will be AI companies and their investors.

If, as many believe, the advent of AGI--AIs that can do most jobs humans can--*could* deliver rapid economic progress and material abundance, the question of how…

29.07.2025 16:09 | 👍 0  🔁 0  💬 1  📌 0

…disastrous for almost everyone living under them. A wealth of economic evidence shows that they substantially slow growth, impoverishing ordinary workers, whether free or unfree.

Unfree labor systems benefit only the elite class who own substantial numbers of laborers. Historically, those have…

29.07.2025 16:06 | 👍 0  🔁 0  💬 1  📌 0

To be clear, our argument is not that a labor system based on the ownership of (AI) laborers will be the *moral* equivalent of systems based on the ownership of humans!

Rather, we argue that the systems will have similar economic effects. In short, systems of unfree labor are economically…

29.07.2025 16:04 | 👍 0  🔁 0  💬 1  📌 0

Enjoyed the recent @80000hours.bsky.social podcast w/ @tobyord.bsky.social. Agree that AI policy researchers should dream bigger on societal Qs. Simon Goldstein and I have been working on one of Toby's big questions: Should the AGI economy be run like a slave society (as it will be under default law)?

29.07.2025 16:02 | 👍 3  🔁 0  💬 2  📌 0
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5369439

…caught up to the frontier.

If the joint lab couldn't clear the bottleneck, we think that it would also serve as a credible scientific authority to both the US and China around which a more coordinated global pause could be built.

Much more in the full draft: papers.ssrn.com/sol3/papers....

29.07.2025 15:57 | 👍 0  🔁 0  💬 0  📌 0

…others, and it hit a new level of capabilities (and misalignment) where advanced rogue systems became a serious threat, *it* could pause capabilities progress and go all-in on clearing the alignment bottleneck. The frontier lab would have 1 year to do so before others…

29.07.2025 15:57 | 👍 1  🔁 0  💬 1  📌 0

…capabilities parity (and thus deterrence) all the way up the AI capabilities ladder.

2) For AI safety, the joint lab would, essentially automatically, function as a global "pause" button on frontier capabilities advancement. If the joint lab was, e.g., 1 year ahead of all…

29.07.2025 15:57 | 👍 0  🔁 0  💬 1  📌 0

…the most compute, hire the best researchers, and (we think) have an excellent chance of becoming the leading AI lab in the world. This would have two effects:

1) On geostrategy, this lab would diffuse the most advanced AI systems to the US and China simultaneously, ensuring…

29.07.2025 15:56 | 👍 0  🔁 0  💬 1  📌 0

How to operationalize this while also reducing catastrophic/existential risk from AI? Our proposal:

The US and China should make an agreement to jointly found a frontier AI lab. Backed by the sovereign wealth and power of the two most powerful countries on earth, that lab could buy…

29.07.2025 15:56 | 👍 0  🔁 0  💬 1  📌 0

But the same AIs needed for advanced military applications will also likely be excellent at improving healthcare, education, research, and much more.

Here, there is no guns/butter tradeoff. The guns *are* the butter.

Thus, game theory favors equilibria of *high* capabilities.

29.07.2025 15:56 | 👍 1  🔁 0  💬 1  📌 0

In nuclear competition, equilibria of *low* capabilities (e.g., 6K warheads per side, rather than 60K) are attractive b/c of the guns/butter tradeoff. Nukes are expensive, and they have few positive spillovers to the rest of the economy. They don't, e.g., improve healthcare.

29.07.2025 15:56 | 👍 0  🔁 0  💬 1  📌 0

One thing from nuclear game theory that *does* apply to AI is the idea that what matters most is rough parity of capabilities (for second-strike deterrence), rather than the total number of warheads (or total AI capability).

But there are many possible equilibria of parity.

29.07.2025 15:55 | 👍 1  🔁 0  💬 1  📌 0
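A toy version of the comparison (an illustrative sketch; the payoff terms $s$, $c$, and $g$ are assumed for illustration, not taken from the paper): hold rough parity fixed, so each side's security payoff is a constant $s$ at any common capability level $x$. If a unit of capability costs $c$ and generates $g$ in civilian spillovers, a side's payoff at the symmetric level $x$ is

$$u(x) = s + (g - c)\,x.$$

For nukes, $g \approx 0 < c$, so $u(x)$ falls as $x$ grows and the low-capability parity point (6K warheads) beats the high one (60K). For AI, if $g > c$ (the guns *are* the butter), $u(x)$ rises with $x$, so among parity equilibria both sides prefer the high-capability one.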

Most critics of an AI arms race advocate international coordination to *slow* AI progress. They rely on analogies to Cold War nonproliferation and disarmament agreements.

We argue that there are important differences between AI and nukes that make such strategies hard.

29.07.2025 15:55 | 👍 0  🔁 0  💬 1  📌 0

The WH's AI Action Plan has some good stuff. But it begins, "The US is in a race to achieve global dominance in AI."

Like many, @simondgoldstein and I think that an AI arms race w/ China is a mistake.

Our new paper lays out a novel game-theoretic approach to avoiding the race.

29.07.2025 15:55 | 👍 1  🔁 0  💬 1  📌 0

…RAISE Act are extremely reasonable first steps towards mitigating that risk. I would, of course, favor a single, well-designed federal regime over a patchwork of state regs. But if the feds want to do that, they can. The ban was no substitute for actually doing something.

01.07.2025 18:38 | 👍 0  🔁 0  💬 0  📌 0

I'm on balance relieved that the federal ban on state-level AI regulation is dead. I do expect many state laws to be dumb and tech-illiterate. But government also needs to take seriously the warnings that advanced AI systems could kill large numbers of people. Bills like NY's...

01.07.2025 18:38 | 👍 0  🔁 0  💬 1  📌 0
In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights
The judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market," said attorney Meetali Jain of the Tech Justice Law Project.

This First Amendment ruling is correct: As I argue in @WashULRev, the outputs of generative AI systems like LLMs are not protected speech. Not of the AI company. Not of the user. Read more here! papers.ssrn.com/sol3/papers....

www.law.com/therecorder/...

23.05.2025 16:31 | 👍 1  🔁 0  💬 0  📌 0

Very important point raised by @petersalib.bsky.social and Simon Goldstein regarding AI risk and alignment:

www.ai-frontiers.org/articles/tod...

21.05.2025 16:33 | 👍 5  🔁 2  💬 0  📌 0
Tahra Hoops on X: "New Pope is abundance-pilled" https://t.co/jnSFxcmNR3

Which US Constitutional or Canon laws, if any, forbid someone from being simultaneously Pope and the US President?

Asking for a friend.

x.com/TahraHoops/s...

08.05.2025 18:02 | 👍 2  🔁 0  💬 0  📌 0

AGI is, I think, the most important thing that could happen in the next 4 years. Yes, even more than the other insane stuff. I wish more legal thinkers were seriously engaged with the prospect of world-shattering AI. Law can't fix all of the problems alone. But it can help.

05.03.2025 02:01 | 👍 7  🔁 2  💬 1  📌 0
AI Rights for Human Safety
AI companies are racing to create artificial general intelligence, or "AGI." If they succeed, the result will be human-level AI systems that can independ…

papers.ssrn.com/sol3/papers....

04.03.2025 17:11 | 👍 0  🔁 1  💬 0  📌 0

Pleased to share that my (and Simon Goldstein's) newest article, "AI Rights for Human Safety," is forthcoming in the Virginia Law Review.

04.03.2025 17:09 | 👍 15  🔁 4  💬 1  📌 1

When authors of the AGI-denialist "stochastic parrots" paper publish "Fully Autonomous AI Agents Should Not Be Developed," you should start to worry that AGI really is imminent.

When their main argument is that AGI will kill people, you should worry more.

07.02.2025 18:01 | 👍 4  🔁 1  💬 0  📌 1

The downside of this strategy is that, if you point a shotgun at someone's head, you give them even more reason to murder you than they would otherwise have had (conditional on being able to pull it off).

But yes, everyone should read Gibson!

09.01.2025 22:21 | 👍 4  🔁 0  💬 1  📌 0

Don't worry about it. Lawyers in general are typography pedants. So I appreciated it!

09.01.2025 22:18 | 👍 5  🔁 0  💬 1  📌 0

Ha! Great catch. I'll see if we can get it fixed. Thanks for reading.

09.01.2025 19:27 | 👍 2  🔁 0  💬 1  📌 0

Or an even more overt scenario: AI promises a huge monetary reward for the OpenAI employee who helps it escape. Today, we already worry about foreign governments offering the same to AI lab employees for stealing the weights. A highly capable AI could credibly do the same.

09.01.2025 19:26 | 👍 1  🔁 0  💬 1  📌 0
Rogue AI Moves Three Steps Closer
OpenAI's new o3 model suggests that it will not be long before AI systems are as smart as their human minders--or smarter.

In light of OpenAI's new o3 model, @petersalib.bsky.social writes that "rogue AI is a concern worth taking seriously--and taking seriously now. This is a problem for which, by its very nature, solutions cannot wait until there is conclusive proof of their need."

09.01.2025 18:23 | 👍 45  🔁 14  💬 4  📌 6
