
Max Kleiman-Weiner

@maxkw.bsky.social

professor at university of washington and founder at csm.ai. computational cognitive scientist. working on social and artificial intelligence and alignment. http://faculty.washington.edu/maxkw/

4,189 Followers  |  372 Following  |  432 Posts  |  Joined: 13.09.2023

Latest posts by maxkw.bsky.social on Bluesky


Forget modeling every belief and goal! What if we represented people as following simple scripts instead (i.e., "cross the crosswalk")?

Our new paper shows AI that models others' minds as Python code 💻 can quickly and accurately predict human behavior!

shorturl.at/siUYI%F0%9F%...

03.10.2025 02:24 — 👍 36    🔁 14    💬 3    📌 3
Modeling Others' Minds as Code Accurate prediction of human behavior is essential for robust and safe human-AI collaboration. However, existing approaches for modeling people are often data-hungry and brittle because they either ma...

arXiv: arxiv.org/abs/2510.01272

03.10.2025 05:01 — 👍 3    🔁 0    💬 0    📌 0

New paper challenges how we think about Theory of Mind. What if we model others as executing simple behavioral scripts rather than reasoning about complex mental states? Our algorithm, ROTE (Representing Others' Trajectories as Executables), treats behavior prediction as program synthesis.

03.10.2025 05:01 — 👍 14    🔁 2    💬 2    📌 0
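The script-based idea above can be sketched in a few lines. Everything here is illustrative and my own invention (the `cross_crosswalk` and `jaywalk` scripts, the state encoding, and the `score` helper are not the paper's actual code): a person is modeled as an executable script, and prediction means running the script that best reproduces their observed behavior.

```python
# Toy sketch of modeling behavior as executable scripts (illustrative only).

def cross_crosswalk(state):
    """A behavioral script: wait for the walk signal, then step forward."""
    return "step_forward" if state["signal"] == "walk" else "wait"

def jaywalk(state):
    """An alternative script: cross whenever no car is near."""
    return "wait" if state["car_near"] else "step_forward"

def score(script, trajectory):
    """Fraction of observed (state, action) pairs the script reproduces."""
    return sum(script(s) == a for s, a in trajectory) / len(trajectory)

def predict(scripts, trajectory, state):
    """Behavior prediction as program selection + execution."""
    best = max(scripts, key=lambda f: score(f, trajectory))
    return best(state)
```

In the paper's framing the candidate scripts are synthesized rather than hand-written; the point of the sketch is that executing a matching script sidesteps explicit inference over beliefs and goals.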

Definitely; we should look more closely at sample complexity for training, but for things like webnav there are massive datasets, so it could be a good fit.

03.10.2025 00:12 — 👍 1    🔁 0    💬 0    📌 0

In some sense, yes, in that you need diverse trajectories of the agent's behavior in different contexts, but you don't need to have access to those goals, or even the distribution, and the agent might be doing non-goal-directed behavior, such as exploration.

02.10.2025 19:49 — 👍 1    🔁 0    💬 1    📌 0
Generative Value Conflicts Reveal LLM Priorities Past work seeks to align large language model (LLM)-based assistants with a target set of values, but such assistants are frequently forced to make tradeoffs between values when deployed. In response ...

Great work led by @andyliu.bsky.social and collaborators:
@kghate.bsky.social, @monadiab77.bsky.social, @daniel-fried.bsky.social, @atoosakz.bsky.social
Preprint: www.arxiv.org/abs/2509.25369

02.10.2025 18:37 — 👍 3    🔁 1    💬 0    📌 0

When values collide, what do LLMs choose? In our new paper, "Generative Value Conflicts Reveal LLM Priorities," we generate scenarios where values are traded off against each other. We find models prioritize "protective" values in multiple-choice, but shift toward "personal" values when interacting.

02.10.2025 18:37 — 👍 9    🔁 0    💬 1    📌 0

Very cool! Thanks for sharing! It would be interesting to compare your exploration ideas with EELMA on open-ended tasks beyond Little Alchemy.

02.10.2025 05:04 — 👍 1    🔁 0    💬 0    📌 0
Estimating the Empowerment of Language Model Agents As language model (LM) agents become more capable and gain broader access to real-world tools, there is a growing need for scalable evaluation frameworks of agentic capability. However, conventional b...

Work led by Jinyeop Song together with Jeff Gore. Check out the preprint here: arxiv.org/abs/2509.22504

01.10.2025 04:27 — 👍 5    🔁 0    💬 0    📌 0

Excited by our new work estimating the empowerment of LLM-based agents in text and code. Empowerment is the causal influence an agent has over its environment and measures an agent's capabilities without requiring knowledge of its goals or intentions.

01.10.2025 04:27 — 👍 16    🔁 2    💬 3    📌 0
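Empowerment has a crisp formal reading: the channel capacity between an agent's action sequences and the states they produce. A minimal sketch of the concept (a toy 1-D world of my own invention, not the paper's estimator): with deterministic dynamics, n-step empowerment reduces to the log of the number of distinct reachable states.

```python
import math
from itertools import product

def step(state, action):
    """1-D world with positions 0..4: move left (-1), stay (0), or right (+1)."""
    return max(0, min(4, state + action))

def empowerment(state, n):
    """log2 of the number of distinct states reachable by some n-step action
    sequence -- the deterministic special case of empowerment."""
    reachable = set()
    for seq in product([-1, 0, 1], repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))
```

From the center of the line, `empowerment(2, 2)` is log2(5), while pinned against the wall `empowerment(0, 2)` is only log2(3): the agent in the middle has more causal influence over where it ends up, with no reference to any goal.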

Claire's new work showing that when an assistant aims to optimize another's empowerment, it can lead to others being disempowered (both as a side effect and as an intentional outcome)!

06.08.2025 22:44 — 👍 7    🔁 0    💬 0    📌 0
Person standing next to poster titled "When Empowerment Disempowers"

Still catching up on my notes after my first #cogsci2025, but I'm so grateful for all the conversations and new friends and connections! I presented my poster "When Empowerment Disempowers" -- if we didn't get the chance to chat or you would like to chat more, please reach out!

06.08.2025 22:31 — 👍 15    🔁 3    💬 0    📌 1

It's forgivable =) We just do the best we can with what we have (i.e., resource rational) 🤣

31.07.2025 23:56 — 👍 2    🔁 0    💬 0    📌 0
Max giving a talk with the slide in OP

lol this may be the most cogsci cogsci slide I've ever seen, from @maxkw.bsky.social

"before I got married I had six theories about raising children, now I have six kids and no theories"......but here's another theory #cogsci2025

31.07.2025 18:18 — 👍 67    🔁 9    💬 2    📌 1
Evolving general cooperation with a Bayesian theory of mind | PNAS Theories of the evolution of cooperation through reciprocity explain how unrelated self-interested individuals can accomplish more together than th...

Quantifying the cooperative advantage shows why humans, the most sophisticated cooperators, also have the most sophisticated machinery for understanding the minds of others. It also offers principles for building more cooperative AI systems. Check out the full paper!

www.pnas.org/doi/10.1073/...

22.07.2025 06:03 — 👍 8    🔁 0    💬 2    📌 0

Finally, when we tested it against memory-1 strategies (such as TFT and WSLS) in the iterated prisoner's dilemma, the Bayesian Reciprocator expanded the range where cooperation is possible and dominated prior algorithms, using the *same* model across simultaneous & sequential games.

22.07.2025 06:03 — 👍 5    🔁 0    💬 1    📌 0
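For readers unfamiliar with the baselines: a memory-1 strategy conditions only on the previous round. Here are the textbook definitions of the two named above, tit-for-tat (TFT) and win-stay-lose-shift (WSLS); the `"C"`/`"D"` encoding is my own, not code from the paper.

```python
# Two classic memory-1 strategies for the iterated prisoner's dilemma.

def tit_for_tat(my_last, their_last):
    """Cooperate on the first round, then copy the partner's previous move."""
    return their_last if their_last is not None else "C"

def win_stay_lose_shift(my_last, their_last):
    """Keep your move after a good outcome (partner cooperated), flip it otherwise."""
    if my_last is None:
        return "C"
    if their_last == "C":                   # win: stay
        return my_last
    return "D" if my_last == "C" else "C"   # lose: shift
```

Both strategies see only the last round, which is exactly the brittleness the thread points to: they have no model of why the partner acted, so a single noisy defection can derail them.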

Even in one-shot games with observability, the Bayesian Reciprocator learns from observing others' interactions and enables cooperation through indirect reciprocity.

22.07.2025 06:03 — 👍 6    🔁 0    💬 1    📌 0

In dyadic repeated interactions in the Game Generator, the Bayesian Reciprocator quickly learns to distinguish cooperators from cheaters, remains robust to errors, and achieves high population payoffs through sustained cooperation.

22.07.2025 06:03 — 👍 6    🔁 0    💬 2    📌 0

Instead of just testing on the repeated prisoner's dilemma, we created a "Game Generator" that creates infinite cooperation challenges where no two interactions are alike. Many classic games, like the prisoner's dilemma or resource allocation games, are just special cases.

22.07.2025 06:03 — 👍 9    🔁 0    💬 1    📌 0

It uses theory of mind to infer the latent utility functions of others through Bayesian inference and an abstract utility calculus to work across ANY game.

22.07.2025 06:03 — 👍 5    🔁 0    💬 1    📌 0

We introduce the "Bayesian Reciprocator," an agent that cooperates with others proportional to its belief that others share its utility function.

22.07.2025 06:03 — 👍 7    🔁 0    💬 1    📌 0
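A heavily simplified sketch of that idea (my toy model, not the paper's, which infers full latent utility functions): keep a Bayesian belief that the partner shares your utility function, update it by Bayes' rule from each observed action, and cooperate with probability proportional to that belief. The likelihood parameters below are made-up illustrative values.

```python
# Toy "Bayesian Reciprocator" belief update (illustrative assumptions only).

P_COOP_IF_SHARED = 0.9   # assumed P(partner cooperates | shared utility)
P_COOP_IF_SELFISH = 0.2  # assumed P(partner cooperates | selfish utility)

def update_belief(belief, action):
    """Posterior P(shared utility) after observing one action ('C' or 'D')."""
    like_shared = P_COOP_IF_SHARED if action == "C" else 1 - P_COOP_IF_SHARED
    like_selfish = P_COOP_IF_SELFISH if action == "C" else 1 - P_COOP_IF_SELFISH
    num = like_shared * belief
    return num / (num + like_selfish * (1 - belief))

def prob_cooperate(belief):
    """Cooperate in proportion to the belief that utilities are shared."""
    return belief
```

Observed cooperation raises the belief (and hence the agent's own cooperation rate); observed defection lowers it, which is what makes the agent forgiving of noise yet exploitation-resistant.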

Classic models of cooperation like tit-for-tat are simple but brittle. They only work in specific games, can't handle noise and stochasticity, and don't understand others' intentions. But human cooperation is remarkably flexible and robust. How and why?

22.07.2025 06:03 — 👍 6    🔁 0    💬 1    📌 0

This project was first presented back in 2018 (!) and was born from a collaboration between Alejandro Vientos, Dave Rand @dgrand.bsky.social & Josh Tenenbaum @joshtenenbaum.bsky.social

22.07.2025 06:03 — 👍 7    🔁 0    💬 1    📌 0
Evolving general cooperation with a Bayesian theory of mind | PNAS Theories of the evolution of cooperation through reciprocity explain how unrelated self-interested individuals can accomplish more together than th...

Our new paper is out in PNAS: "Evolving general cooperation with a Bayesian theory of mind"!

Humans are the ultimate cooperators. We coordinate on a scale and scope no other species (nor AI) can match. What makes this possible? 🧵

www.pnas.org/doi/10.1073/...

22.07.2025 06:03 — 👍 92    🔁 36    💬 2    📌 2

As always, CogSci has a fantastic lineup of workshops this year. An embarrassment of riches!

Still deciding which to pick? If you are interested in building computational models of social cognition, I hope you consider joining @maxkw.bsky.social, @dae.bsky.social, and me for a crash course on memo!

18.07.2025 13:56 — 👍 21    🔁 6    💬 1    📌 0

Very excited for this workshop!

17.07.2025 04:42 — 👍 14    🔁 2    💬 0    📌 0
Promotional image for a #CogSci2025 workshop titled "Building computational models of social cognition in memo." Organized and presented by Kartik Chandra, Sean Dae Houlihan, and Max Kleiman-Weiner. Scheduled for July 30 at 8:30 AM in room Pacifica I. The banner features the conference theme "Theories of the Past / Theories of the Future," and the dates: July 30–August 2 in San Francisco.

#Workshop at #CogSci2025
Building computational models of social cognition in memo

πŸ—“οΈ Wednesday, July 30
πŸ“ Pacifica I - 8:30-10:00
πŸ—£οΈ Kartik Chandra, Sean Dae Houlihan, and Max Kleiman-Weiner
πŸ§‘β€πŸ’» underline.io/events/489/s...

16.07.2025 20:32 — 👍 12    🔁 2    💬 1    📌 2

'Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination'

@kjha02.bsky.social · Wilka Carvalho · Yancheng Liang · Simon Du ·
@maxkw.bsky.social · @natashajaques.bsky.social

doi.org/10.48550/arX...

(3/20)

15.07.2025 13:44 — 👍 6    🔁 2    💬 1    📌 0
AI DOOM

Settling in for my flight and apparently A.I. DOOM is now a movie genre between Harry Potter and Classics. Nothing better than an existential crisis with pretzels and a ginger ale.

29.06.2025 22:52 — 👍 6    🔁 0    💬 0    📌 0

Thanks to the Diverse Intelligence Community for all these inspiring days & impressions in Sydney 🙏🏻 @chriskrupenye.bsky.social @katelaskowski.bsky.social @divintelligence.bsky.social @maxkw.bsky.social

28.06.2025 03:46 — 👍 17    🔁 3    💬 0    📌 0
