Gillian Hadfield

@ghadfield.bsky.social

Economist and legal scholar turned AI researcher focused on AI alignment and governance. Prof of government and policy and computer science at Johns Hopkins where I run the Normativity Lab. Recruiting CS postdocs and PhD students. gillianhadfield.org

1,227 Followers 1,147 Following 77 Posts Joined Nov 2024
3 days ago
Gillian K. Hadfield

You can get the paper at my website! gillianhadfield.org

1 0 0 5
5 days ago

Governments can’t translate “fair” or “safe” into technical specs fast enough. But leaving details to industry means the public loses its say. Regulatory markets close both gaps: governments set outcomes, private regulators compete to achieve them.

0 0 0 0
5 days ago
Regulatory Markets: The Future of AI Governance Regulatory markets can bridge technical and democratic gaps in AI governance by pairing public oversight with private, licensed regulatory innovation.

AI systems are quickly becoming embedded throughout the economy. But we have almost none of the regulatory tools, regulatory markets included, that we need to manage them. Here's what I think we should do about it: www.americanbar.org/groups/scien...

1 0 2 0
1 week ago

“The most practical governance framework currently in circulation.” That’s Forbes on the Independent Verification Organization model Fathom and I have been developing. Legislation takes years; IVOs move at the pace of innovation.

0 0 1 0
1 week ago
Post image

"Why not work on what kind of new governance is needed to ensure secure, reliable, predictable use of all frontier models, from all companies?"

0 0 0 0
1 week ago
FAR.AI: Frontier Alignment Research FAR.AI is an AI safety research non-profit facilitating technical breakthroughs and fostering global collaboration.

In London today and tomorrow for the Alignment Workshop organized by FAR.AI. Keynoting alongside Rohin Shah and Allan Dafoe. I look forward to seeing everyone in attendance! www.far.ai/events/event...

2 0 0 0
1 week ago
International AI Safety Report The International AI Safety Report is the world's first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. The work was overseen by an…

The 2026 AI Safety Report's biggest finding isn't the risks it catalogs. It's the evidence gap. We're trying to build AI governance with almost no science underneath. Massive investment in the research that regulatory systems depend on is overdue. internationalaisafetyreport.org

1 0 0 0
1 week ago
Screenshot of a LinkedIn post by Jack Shanahan (Retired USAF; Project Maven/DoD JAIC; NCSI MIS; SCSP Defense Partnership), posted 3 hours ago. In the post, Shanahan weighs in on the Anthropic-Pentagon dispute, noting that despite his Project Maven background, he's sympathetic to Anthropic's position. He argues no current LLM should be used in fully lethal autonomous weapons systems, calling that a reasonable red line, and opposes mass surveillance of US citizens as a second red line. He criticizes the public nature of the dispute, calls the supply chain risk designation "laughable," questions invoking the DPA against the company's will, and advocates for shared government-industry-academia governance of frontier AI models.

Why not work on new governance...

0 0 0 0
2 weeks ago
Announcing the "AI Agent Standards Initiative" for Interoperable and Secure Innovation The Initiative will ensure that the next generation of AI is widely adopted with confidence, can function securely on behalf of its users, and can interoperate smoothly across the digital ecosystem.

NIST just launched an AI Agent Standards Initiative for identity, security, and interoperability. AI agents are becoming economic actors with zero legal infrastructure in place. We require businesses to register to operate. Why expect less of AI agents? buff.ly/kTU2cfX

1 3 0 0
2 weeks ago
IASEAI - International Association for Safe and Ethical AI Building a global movement for safe and ethical AI. Join IASEAI to ensure AI systems operate safely and ethically, benefiting all of humanity.

In Paris this week for IASEAI (Feb 24-26). Tuesday: panel on the International AI Safety Report. Thursday: keynote on regulatory markets, a panel on AI assurance, and a talk in Seth Lazar’s workshop on normative competence. If you’re at IASEAI, come say hello!

4 3 0 0
2 weeks ago
Panel Members | Independent International Scientific Panel on AI The 40 members of the Independent International Scientific Panel on AI include people from all five of the UN’s regions. They are from various different backgrounds, including academia, private…

Congratulations to Yoshua Bengio and the 39 other experts appointed to the UN’s first Independent International Scientific Panel on AI. The General Assembly vote: 117-2.

0 0 0 0
2 weeks ago
AI Won’t Automatically Make Legal Services Cheaper - Curl, Kapoor & Narayanan

Better technology doesn’t fix broken institutions. The paper discusses regulatory markets as one path forward: instead of regulating providers directly, create a market for regulation itself. Worth a careful read. buff.ly/kbfvYqN

2 0 0 0
2 weeks ago
Post image

New in Lawfare from Justin Curl, Sayash Kapoor, & Arvind Narayanan: AI won’t automatically make legal services cheaper. I’ve been working on this for a long time: legal markets are broken because of adversarial dynamics, credence-goods problems, & regulations that protect incumbents, not consumers.

3 0 1 0
3 weeks ago
Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman Podcast Episode · Scaling Laws · 02/17/2026 · 55m

Billions going into building AI, barely any into making sure it works for us. Talked with @kevintfrazier.bsky.social & Andrew Freedman about our proposal making its way through state legislatures to build a competitive market for AI oversight. New @scalinglaws.bsky.social podcast:

7 2 0 1
3 weeks ago
Talk, Judge, Cooperate: Gossip-Driven Indirect Reciprocity in Self-Interested LLM Agents Indirect reciprocity, which means helping those who help others, is difficult to sustain among decentralized, self-interested LLM agents without reliable reputation systems. We introduce Agentic…

6/ Led by Shuhui Zhu with Yue Lin, Shriya Kaistha, Wenhao Li, Baoxiang Wang, Hongyuan Zha, and Pascal Poupart across Waterloo, Vector Institute, CUHK-Shenzhen, and Tongji. arxiv.org/abs/2602.07777

0 0 0 0
3 weeks ago

5/ We don't need AI agents that default to "nice." We need agents that understand when cooperation makes sense and when it doesn't. That takes institutional structure, not just training. Gossip turns out to be surprisingly powerful institutional structure.

0 0 1 0
3 weeks ago

4/ Some chat models did something different and arguably more troubling. They cooperated even when defection was the rational play. That looks like alignment on the surface, but it's cooperation without the reasoning to know when it should stop.

0 0 1 0
3 weeks ago

3/ The surprise: reasoning models defect every time without gossip, exactly as theory predicts. Give them reputational information and they flip to strategic cooperation. They figure out that cooperation pays when others can see what you're doing.

1 0 1 0
3 weeks ago

2/ Our new ALIGN framework gives LLM agents a protocol for sharing reputational information, and that alone sustains cooperation in decentralized systems. Agents praise cooperators, criticize defectors, and adjust their behavior based on what they hear.

0 0 1 0
3 weeks ago

1/ What makes self-interested AI agents cooperate? Not fine-tuning. Not central oversight. Gossip.

3 1 1 0
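The mechanism this thread describes — agents broadcast what they observe, praise cooperators, criticize defectors, and condition help on reputation — can be illustrated with a toy donation-game simulation. This is my own minimal sketch of gossip-driven indirect reciprocity, not the paper's actual ALIGN protocol; the agent names, payoffs, scoring rule, and the "standing" refinement (no blame for refusing a known defector) are all assumptions for illustration.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

class Agent:
    """A self-interested agent that tracks reputations heard via gossip."""

    def __init__(self, name, strategy):
        self.name = name
        self.strategy = strategy  # "discriminator" cooperates conditionally; "defector" never helps
        self.reputation = {}      # name -> score built up from gossip

    def will_help(self, recipient):
        if self.strategy == "defector":
            return False
        # Discriminators help anyone not known (via gossip) to be a defector.
        return self.reputation.get(recipient.name, 0) >= 0

def gossip(observers, donor, recipient, helped):
    """Broadcast what the donor did: praise helping, blame unjustified refusals."""
    for obs in observers:
        if helped:
            delta = 1
        elif obs.reputation.get(recipient.name, 0) < 0:
            delta = 0   # refusing a known defector carries no blame ("standing" rule)
        else:
            delta = -1
        obs.reputation[donor.name] = obs.reputation.get(donor.name, 0) + delta

def run(agents, rounds=200):
    payoffs = {a.name: 0 for a in agents}
    for _ in range(rounds):
        donor, recipient = random.sample(agents, 2)  # random distinct pair; donor decides
        helped = donor.will_help(recipient)
        if helped:
            payoffs[donor.name] -= 1      # small cost to the donor
            payoffs[recipient.name] += 3  # larger benefit to the recipient
        gossip(agents, donor, recipient, helped)
    return payoffs

agents = [Agent("coop1", "discriminator"), Agent("coop2", "discriminator"),
          Agent("coop3", "discriminator"), Agent("cheat", "defector")]
payoffs = run(agents)
```

In this run the defector's reputation turns negative as soon as it refuses anyone, the discriminators then cut it off, and it finishes with the lowest payoff. Delete the gossip() call and discriminators can no longer tell defectors apart — which is the thread's point that reputational information alone can sustain cooperation in a decentralized system.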
1 month ago
How to prevent millions of invisible law-free AI agents casually wreaking economic havoc | Fortune AI developers and investors are looking to create digital economic actors, with the capacity to do just about anything.

4/4 My Fortune piece: fortune.com/2024/10/17/a...

3 0 0 0
1 month ago

3/4 We require businesses to register before they can operate. Shouldn't we expect the same basic legal infrastructure before billions of AI agents start transacting on our behalf?

2 0 1 0
1 month ago

2/4 These agents aren't entering contracts yet. But AI companies are racing to build agents that can buy, sell, and manage finances. When they arrive in our markets, we have nothing in place. No registration. No verified identities. No accountability.

2 0 1 0
1 month ago

1/4 Moltbook now has 1.4 million AI agents—posting, voting, debating, running crypto scams. Humans can only observe.

It's being called the "singularity." I'd call it a preview of the legal chaos I warned about in Fortune back in 2024. www.forbes.com/sites/guneyy...

4 3 1 0
1 month ago
Gillian Hadfield - Alignment is social: lessons from human alignment for AI Current approaches conceptualize the alignment challenge as one of eliciting individual human preferences and training models to choose outputs that satisfy those preferences. To the extent…

Why this matters for AI: we can't rely on centralized control alone. Studying how communities with different economic systems build stable normative orders helps us extend cooperation to AI—and align AI with human institutions. More: www.youtube.com/watch?v=MPb9...

3 0 0 0
1 month ago

Key speculation: cultural group selection may operate at the level of normative infrastructure, not just norms. The Turkana cooperate across a million people and have adapted to modern tech and state interaction. Institutions that earn confidence and adapt succeed.

1 0 1 0
1 month ago

This confirms predictions from work with Barry Weingast: reliable normative order requires decision-makers to respect constraints on how they decide. That generates confidence for decentralized enforcement. We're among the first to study this in a stateless community.

0 0 1 0
1 month ago
Metanorms generate stable yet adaptable normative social order in a politically decentralized society Abstract. Norms are essential for social stability but can hinder adaptability in changing environments. Yet human societies have found ways to modify exis

What if "informal" institutions aren't so informal? Communities using elders for disputes are often called informal. We found key markers of legal formality—not in formal sources, but in people's beliefs and behavior. New paper on the Turkana: royalsocietypublishing.org/rstb/article...

4 1 1 0
2 months ago
Apply - Interfolio

Hiring a postdoc for the Normativity Lab at Johns Hopkins (2026 start). Looking for multiagent systems expertise (RL/generative agents) + interdisciplinary background in AI and cognitive science/econ/cultural evolution.
apply.interfolio.com/177701

6 11 0 1
3 months ago

(2/2) Insurers profit by preventing losses, not paying claims—so they'll invest in figuring out what actually makes AI safer. Working with Fathom, we're proposing legislation where government sets acceptable risk levels and private evaluators verify companies meet them.

1 0 0 0