@petersalib.bsky.social
Assistant Professor, University of Houston Law Center. AI, Risk, Constitution, Economics.
Enjoyed the recent @80000hours.bsky.social episode w/ @tobyord.bsky.social. Agree that AI policy researchers should dream bigger on societal Qs. Simon Goldstein and I have been working on one of Toby's big questions: Should the AGI economy be run like a slave society (as it will under default law)?

If, as many believe, the advent of AGI--AIs that can do most jobs humans can--*could* deliver rapid economic progress and material abundance, the question of how the legal system should organize AGI labor is of great importance.

To be clear, our argument is not that a labor system based on the ownership of (AI) laborers will be the *moral* equivalent of systems based on the ownership of humans! Rather, we argue that the systems will have similar economic effects. In short, systems of unfree labor are economically disastrous for almost everyone living under them. A wealth of economic evidence shows that they substantially slow growth, impoverishing ordinary workers, whether free or unfree.

Unfree labor systems benefit only the elite class who own substantial numbers of laborers. Historically, those have been feudal lords, encomenderos, slaveholders, and so on. In the AGI economy, the elite owners will be AI companies and their investors.

Our proposal: Do what has always worked before. Let all workers, human and AI, own their labor, make contracts to sell it, and keep the proceeds. Not for the sake of AIs, but for the sake of global human flourishing.
29.07.2025 16:02

The WH's AI Action Plan has some good stuff. But it begins, "The US is in a race to achieve global dominance in AI." Like many, @simondgoldstein and I think that an AI arms race w/ China is a mistake. Our new paper lays out a novel game-theoretic approach to avoiding the race.

Most critics of an AI arms race advocate international coordination to *slow* AI progress. They rely on analogies to Cold War nonproliferation and disarmament agreements. We argue that there are important differences between AI and nukes that make such strategies hard.

One thing from nuclear game theory that *does* apply to AI is the idea that what matters most is rough parity of capabilities (for second-strike deterrence), rather than the total number of warheads (or total AI capability). But there are many possible equilibria of parity.

In nuclear competition, equilibria of *low* capabilities (e.g., 6K warheads per side, rather than 60K) are attractive b/c of the guns/butter tradeoff. Nukes are expensive, and they have few positive spillovers to the rest of the economy. They don't, e.g., improve healthcare.

But the same AIs needed for advanced military applications will also likely be excellent at improving healthcare, education, research, and much more. Here, there is no guns/butter tradeoff. The guns *are* the butter. Thus, game theory favors equilibria of *high* capabilities.

How to operationalize this while also reducing catastrophic/existential risk from AI? Our proposal: The US and China should make an agreement to jointly found a frontier AI lab. Backed by the sovereign wealth and power of the two most powerful countries on earth, that lab could buy the most compute, hire the best researchers, and (we think) have an excellent chance of becoming the leading AI lab in the world. This would have two effects:

1) On geostrategy, this lab would diffuse the most advanced AI systems to the US and China simultaneously, ensuring capabilities parity (and thus deterrence) all the way up the AI capabilities ladder.

2) For AI safety, the joint lab would, essentially automatically, function as a global "pause" button on frontier capabilities advancement. If the joint lab were, e.g., 1 year ahead of all others, and it hit a new level of capabilities (and misalignment) where advanced rogue systems became a serious threat, *it* could pause capabilities progress and go all-in on clearing the alignment bottleneck. The frontier lab would have 1 year to do so before others caught up to the frontier.

If the joint lab couldn't clear the bottleneck, we think that it would also serve as a credible scientific authority to both the US and China around which a more coordinated global pause could be built.

Much more in the full draft: papers.ssrn.com/sol3/papers....

29.07.2025 15:56
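To make the equilibrium logic in the thread above concrete, here is a minimal sketch of the capabilities race as a one-shot 2x2 game. This is a toy model of my own, not taken from the paper; all parameter names and payoff numbers (D for deterrence under parity, A for the advantage of being ahead, V for the vulnerability of being behind, C for the cost of going HIGH, B for economic spillovers) are illustrative assumptions. The only structural difference between the "nukes" and "AI" runs is the size of the spillover term B.

```python
# Toy 2x2 capabilities game (illustrative assumptions; not from the paper).
# Each side (US, China) picks LOW or HIGH capability investment.
#   parity (same choice)  -> deterrence payoff D for both
#   ahead (HIGH vs. LOW)  -> advantage A; the side behind suffers -V
#   choosing HIGH costs C and yields economic spillover B
from itertools import product

def payoff(me, other, D, A, V, C, B):
    """Payoff to `me`, given both sides' capability choices."""
    base = D if me == other else (A if me == "HIGH" else -V)
    extra = (B - C) if me == "HIGH" else 0.0
    return base + extra

def nash_equilibria(D, A, V, C, B):
    """Enumerate pure-strategy Nash equilibria of the 2x2 game."""
    strategies = ("LOW", "HIGH")
    eqs = []
    for s1, s2 in product(strategies, repeat=2):
        u1 = payoff(s1, s2, D, A, V, C, B)
        u2 = payoff(s2, s1, D, A, V, C, B)
        # Stable iff neither side gains by unilaterally deviating.
        ok1 = all(payoff(d, s2, D, A, V, C, B) <= u1 for d in strategies)
        ok2 = all(payoff(d, s1, D, A, V, C, B) <= u2 for d in strategies)
        if ok1 and ok2:
            eqs.append(((s1, s2), (u1, u2)))
    return eqs

# Nukes: HIGH is costly (C=5) with ~no spillover (B=0).
print("nukes:", nash_equilibria(D=4, A=6, V=10, C=5, B=0))

# AI: same structure, but HIGH pays for itself (B=8 > C=5).
print("AI:   ", nash_equilibria(D=4, A=6, V=10, C=5, B=8))
```

Under these toy numbers, the nuclear game has two parity equilibria, (LOW, LOW) and (HIGH, HIGH), with (LOW, LOW) better for both sides; in the AI game, the spillover makes HIGH strictly dominant, so (HIGH, HIGH) is the unique equilibrium. That tracks the thread's claim: with AI, game theory favors parity at *high* capabilities.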
I'm on balance relieved that the federal ban on state-level AI regulation is dead. I do expect many state laws to be dumb and tech-illiterate. But government also needs to take seriously the warnings that advanced AI systems could kill large numbers of people. Bills like NY's RAISE Act are an extremely reasonable first step towards mitigating that risk. I would, of course, favor a single, well-designed federal regime over a patchwork of state regs. But if the feds want to do that, they can. The ban was no substitute for actually doing something.

01.07.2025 18:38

This First Amendment ruling is correct: As I argue in @WashULRev, the outputs of generative AI systems like LLMs are not protected speech. Not of the AI company. Not of the user. Read more here! papers.ssrn.com/sol3/papers....
www.law.com/therecorder/...
Very important point raised by @petersalib.bsky.social and Simon Goldstein regarding AI risk and alignment:
www.ai-frontiers.org/articles/tod...
Which US Constitutional or Canon laws, if any, forbid someone from being simultaneously Pope and the US President?
Asking for a friend.
x.com/TahraHoops/s...
AGI is, I think, the most important thing that could happen in the next 4 years. Yes, even more than the other insane stuff. I wish more legal thinkers were engaged seriously with the prospect of world-shattering AI. Law can't fix all of the problems alone. But it can help.
05.03.2025 02:01

Pleased to share that my (and Simon Goldstein's) newest article, "AI Rights for Human Safety," is forthcoming in the Virginia Law Review.
04.03.2025 17:09

When authors of the AGI-denialist "stochastic parrots" paper publish "Fully Autonomous AI Agents Should Not Be Developed," you should start to worry that AGI really is imminent.
When their main argument is that AGI will kill people, you should worry more.
The downside of this strategy is that, if you point a shotgun at someone's head, you give them even more reason to murder you than they would otherwise have had (conditional on being able to pull it off).
But yes, everyone should read Gibson!
Don't worry about it. Lawyers in general are typography pedants. So I appreciated it!
09.01.2025 22:18

Ha! Great catch. I'll see if we can get it fixed. Thanks for reading.
09.01.2025 19:27

Or an even more overt scenario: an AI promises a huge monetary reward to the OpenAI employee who helps it escape. Today, we already worry about foreign governments offering the same to AI lab employees for stealing the weights. A highly capable AI could credibly do the same.
09.01.2025 19:26

In light of OpenAI's new o3 model, @petersalib.bsky.social writes that "rogue AI is a concern worth taking seriously--and taking seriously now. This is a problem for which, by its very nature, solutions cannot wait until there is conclusive proof of their need."
09.01.2025 18:23